Department of Defense policy states that, to reduce the reaction time and to sustain combat forces until resupply channels are established, war materiel inventories shall be sized, managed, and positioned to maximize flexibility to respond, while minimizing the investment in inventories. The U.S. Army Materiel Command is responsible for managing war materiel, including war reserve spare parts, with policy guidance from the War Reserve Division of the Army’s Office of the Deputy Chief of Staff for Logistics. The Army plans to rely heavily on its specifically designated war reserve weapon systems, equipment, and spare parts when its units arrive in a combat theater of operations. For the Army, war reserves consist of major end items such as trucks and secondary items such as spare parts, food, clothing, medical supplies, and fuel. Spare parts for maintenance represent the largest dollar value of the Army’s war reserve secondary item requirements. War reserves are protected go-to-war assets that are not to be used to improve peacetime readiness or to fill unit shortages. Some of these assets are prepositioned in Southwest Asia, the Pacific, Europe, and on special war reserve ships. The Army would also use available peacetime stocks and what industry could promptly supply. As part of their budget submission process, the services are to develop information on what they need to effectively implement the Department of Defense war materiel inventory policy. During the 1990s, the Army focused on acquiring its major end items for war reserves but funded few associated spare parts. In the Fiscal Years 2000-2005 Program Objective Memorandum for its fiscal year 2000 budget submission, the Army developed plans to fund $265 million for spare parts, with most of the funding planned for the later years. However, for fiscal year 2000, the Army reported that it had obligated $95 million for war reserve spare parts. 
The Army reports its war reserve status in the Department of Defense’s Quarterly Readiness Report to the Congress. These reports assess each service’s readiness to fight various war scenarios, including the two major theater war scenario. The status of equipment availability and spare parts is included in these assessments. The Department of Defense also prepares an annual report on industry’s capabilities to support the military needs. The U.S. Army Materiel Command is responsible for determining requirements for war reserve spare parts. It uses a computer model to do this. The model takes war-planning guidance from the Department of Defense as well as Army information on anticipated force structure. It combines this data with a list of the end items and associated spare parts planned to be used in war. For each end item or part, the model uses data on expected end-item use and spare parts consumption rates due to breakage, geography, and environment. Also, the model uses data on rates of equipment loss due to battle damage. The most recent Quarterly Readiness Report to the Congress (October-December 2000) indicates that the current status of the Army’s war reserve parts is of strategic concern. This strategic concern was raised for the first time in the unclassified version of this report, although prior reports’ classified Annexes A have addressed spare parts concerns. The report states that the Army is between 85 and 95 percent filled in its prepositioned equipment, but shortages still exist in spare parts. The report points out that warfighting and functional commanders in chief of the unified commands continue to express strategic concerns over the status of some prepositioned stockpiles of spare parts. However, the report says that the Department of Defense has taken action to address the critical shortfalls in this area. We were told by a Department official that the action referred to is the Army’s planned future funding for war reserve spare parts. 
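The Army Materiel Command's requirements model is not public, but the kind of per-part calculation it is described as performing (combining expected end-item use, consumption rates, and battle-loss rates) can be sketched in miniature. The function name, parameters, formula, and example numbers below are all hypothetical assumptions for illustration, not the Army's actual model:

```python
# Purely illustrative sketch of a per-part war reserve requirements
# calculation. All names, rates, and the formula itself are hypothetical.

def parts_requirement(end_items, usage_hours_per_day, days_of_war,
                      failures_per_usage_hour, battle_attrition_rate):
    """Spare parts needed = consumption from usage + losses from battle damage."""
    consumption = (end_items * usage_hours_per_day * days_of_war
                   * failures_per_usage_hour)
    battle_losses = end_items * battle_attrition_rate
    return consumption + battle_losses

# Hypothetical example: 500 end items used 10 hours a day over a 90-day
# scenario, failing once per 1,000 usage hours, with 5 percent attrition.
requirement = parts_requirement(500, 10, 90, 0.001, 0.05)
```

The point of the sketch is that the output is only as good as the consumption and attrition factors fed in, which is why the choice of factors becomes the central issue later in the report.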
The report concludes that forces can execute the National Military Strategy, but that the risk posed by parts shortages and other problems is moderate for the first war and remains high for the second. The risk is defined as the likelihood of failing to accomplish theater objectives within planned timelines and means an increase in the potential for higher casualties to U.S. forces. During our review, we found Army documents that provide more information on spare parts shortages. For example, in a May 2000 information paper, the Chief of the Army War Reserve Division in the Office of the Deputy Chief of Staff for Logistics advised the Office of Management and Budget that the planned funding for spare parts would result in moderate risk of not having the needed parts in the first major theater war and greater risk in the second. In addition, an internal Army Materiel Command analysis, based on that command’s December 2000 budget stratification report, shows that the Army has on hand only about 35 percent of its stated prepositioned war reserve spare parts requirement, measured in dollar value rather than number of parts. Another internal document, dated November 1999 and prepared by the Army War Reserve Division, also addressed the availability of spare parts for war reserves. The purpose of this document was to show the requirement and shortfall for war reserve spare parts, based on parts on hand or expected to be available in the future, for the Army’s Fiscal Years 2000-2005 Program Objective Memorandum. It indicates that the Army has a stated requirement of $3.3 billion in spare parts needed for two major theater wars. To meet this requirement, the Army calculates that it has $1.3 billion in parts prepositioned or otherwise set aside for war reserve, it has $0.627 billion in on-hand peacetime inventory that could be used to meet its requirement, and it expects to acquire $0.131 billion in parts from the industrial base. 
This leaves a shortfall of about $1.24 billion. However, the Army expects to get $0.265 billion in future years budget authority through fiscal year 2005 (mostly in the out-years) to help address war reserve spare parts needs. This would still leave a shortfall of about $0.975 billion. Notwithstanding the apparent shortfall in funding for war reserve spare parts, our review found uncertainties about the accuracy of the Army’s requirements in that area. How the Army determines its war reserve spare parts requirements has been a matter of concern within the Department of Defense for several years. After considerable effort to improve the process, the central improvement—using better consumption factors in the requirements calculations—has not been widely implemented. Other issues raise further concerns about the validity of the Army’s stated requirements for war reserve spare parts. They include (1) the potential mismatch between the Army’s methodology for calculating spare parts requirements and the way it intends to maintain and repair equipment on the battlefield, (2) the contributions the industrial base can provide in the way of spare parts support, and (3) the effect of emerging issues such as force structure actions on spare parts requirements. In the 1990s, the Office of the Secretary of Defense expressed concern about the Army’s stated requirements for war reserve spare parts and questioned the determination process used to arrive at those requirements. These concerns were related to the rate at which spare parts would be consumed during wartime. To assuage these concerns, the Army indicated in 1998 that it would change its process for calculating requirements by updating its consumption factors to obtain more realistic information. 
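The shortfall figures above follow from simple arithmetic on the reported amounts (all figures in billions of dollars, as stated in the November 1999 Army document):

```python
# Checking the reported war reserve spare parts shortfall arithmetic.
# Figures are in billions of dollars, taken directly from the text.

requirement = 3.3            # stated two-war spare parts requirement
prepositioned = 1.3          # parts prepositioned or set aside for war reserve
peacetime_inventory = 0.627  # on-hand peacetime stocks usable for the requirement
industrial_base = 0.131      # parts expected from the industrial base

shortfall = requirement - (prepositioned + peacetime_inventory + industrial_base)

future_funding = 0.265       # planned budget authority through fiscal year 2005
remaining = shortfall - future_funding
```

Rounding gives the figures in the text: the shortfall is about $1.24 billion, and the "about $0.975 billion" remaining follows from subtracting $0.265 billion from that rounded shortfall (1.24 - 0.265 = 0.975); the unrounded remainder is about $0.977 billion.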
The change is to replace prior consumption factors that were based on peacetime usage with new factors, referred to as Equipment Usage Profiles and Mean Usage Between Replacement factors, that would better reflect expected usage of parts in wartime. Studies by the Institute for Defense Analyses in 1997 and Coopers & Lybrand in 1998 endorsed the use of the new consumption factors in calculating the requirements. We found that the Army has been slow in implementing this new determination process. To date, about 85 percent of the Army’s stated requirements has not been updated using the new consumption factors. After we brought this condition to the Army’s attention, Army officials in the War Reserve Division of the Office of the Deputy Chief of Staff for Logistics and the Army Materiel Command’s Readiness Division told us that they plan to make all new factors available to those doing the calculations so that the fiscal year 2004 to 2009 Program Objective Memorandum budget package will be based on more accurate data. We found that Army-sponsored studies made in 1997 and 1998 showed that some requirements increased while others decreased when the new consumption factors were tested. For example, the Coopers & Lybrand study sampled various parts requirements and found that aviation parts requirements increased from $78 million to $160 million, while non-aviation parts requirements decreased from $531 million to $218 million. Using a limited analysis for the M1 tank, the Institute for Defense Analyses study found that the parts requirements for this end item decreased by over 50 percent. Until the Army fully incorporates the best consumption factors into its requirements determination process, it cannot be sure that it is buying the right amounts of individual items and adequately supplying the spare parts needed for the two major theater war scenario. 
A potential mismatch exists between the results from the Army’s process for determining spare parts requirements for the war reserve and how the Army plans to repair equipment on the battlefield. The Army has specified that war reserve parts requirements calculations are to optimize parts requirements for specified readiness goals at the least cost, based on Department of Defense guidance. What this means in practice is that the Army’s stated requirements include numerous parts to repair components and subassemblies rather than the components and subassemblies themselves. However, the Army’s current maintenance policy calls for fighting units to remove and replace components and subassemblies rather than repair them on the battlefield. The policy of removing and replacing components and subassemblies appears to conflict with the results of this readiness-based sparing methodology. After we discussed this apparent inconsistency with Army officials, we were told that the Army is currently evaluating this issue and that it plans to change the next parts requirements calculation to reflect the current maintenance policy. Army officials in the War Reserve Division of the Office of the Deputy Chief of Staff for Logistics and the Army Materiel Command’s Readiness Division could not tell us when this evaluation is to be completed, but they expect the evaluation will change the specific parts and quantities required. Currently, the Army is relying on an internal estimate of what industry might contribute in the way of spare parts needed for two major theater wars, rather than well-defined information from industry. The Army estimates that about 4 percent of the stated spare parts requirement will be derived from the industrial base. This estimate was developed by using generic information on percentages of administrative and production lead times for delivery of parts. 
According to Army officials, industry data is not being used in developing this estimate because, in the past, few companies responded to the Army’s industry spare parts surveys. The validity of the Army’s estimate of the amount of parts to be available from industry ($131 million of the $3.3 billion total requirement) is open to question. For example, a 1998 Army study raised concerns about whether industry could support certain spare parts requirements. It found that some requirements assumed to be supported by industry could not be and some that were assumed not to be supported by the industrial base were. The study pointed out that of 86 items (valued at $73 million), 44 of them (valued at $51 million) were found not available from the industrial base, although the Army assumed them to be available. The study further indicated that of 218 items (valued at $60 million), 176 (valued at $54 million) were found to have existing industrial base production capacity, although the Army assumed the items would not be available. The Department of Defense’s most recent Annual Industrial Capabilities Report to Congress, dated January 2001, intended to address industrial concerns, does not address the ability of industry to supply Army critical spare parts for a wartime scenario. The contributions the industrial base can provide have a great bearing on what the Army needs to have in its war reserve, but the Army’s assessments of industrial capability are limited to selected weapon systems or major end items, such as the Comanche weapon system. The Army and the other services have expressed concerns about existing shortages of spare parts for current operations, caused, in part, by firms going out of business or being reluctant to recreate a production line to produce parts for aging equipment. 
Emerging issues associated with (1) the Army’s logistics reform initiatives resulting from its biennial analysis of force requirements, known as Total Army Analysis, (2) the Army’s planned transformation to a lighter, more strategically responsive force, and (3) the statutorily mandated Quadrennial Defense Review could significantly change the kinds and numbers of spare parts that will be needed. Every 2 years the Army performs its Total Army Analysis to (1) determine the number and types of support forces needed by combat forces and (2) allocate end-strength to these requirements. In the Total Army Analysis, the Army uses a series of models to simulate the two nearly simultaneous major theater wars described in the National Military Strategy. In its most recent Total Army Analysis, the Army estimates a 15-percent reduction in spare parts needed in-theater by 2007, citing the implementation of technological improvements in battlefield distribution and the fielding of various logistic enablers as the reasons for the possible reduction. The Army’s planned transformation to a more strategically responsive force is expected to reduce the number of divisional combat systems by 25 percent and consequently reduce the number of parts needed. In October 1999, the Army announced plans to radically change to a lighter, more strategically responsive force. The Army’s stated vision was to be able to deploy (1) a combat-capable brigade in 96 hours, (2) a division in 120 hours, and (3) five divisions in 30 days. The Army plans to validate the capabilities of the first restructured brigade and then take a number of years to complete the entire conversion to a restructured force. Part of this plan is to reduce the number of combat systems from 58 to 45 and personnel by 3,000 in heavy divisions. 
It also expects its new weapon systems will have a greater commonality of parts. While the conversion will likely require the acquisition of yet to be determined spare parts for war reserves, the greater commonality should reduce the amount of spare parts required in the long term. However, we were also told that the number of parts needed in the shorter term would not necessarily be reduced because there would be both old and new systems in the force during the transition to the new structure. The Quadrennial Defense Review for 2001, as well as the Secretary of Defense’s strategic review, could significantly affect the Army’s war reserve requirements. The statutorily mandated Quadrennial Defense Review is intended to provide a comprehensive examination of such things as potential threats, force structure, readiness posture, military modernization programs, and infrastructure and develop options for key decision-makers. The previous Quadrennial Defense Review addressed such decisions as reducing the number of active duty personnel and fostered plans to reduce the amount of logistic support to be provided. Any changes in the Army’s force structure, its utilization of certain weapon systems, or the National Military Strategy itself would consequently affect the kinds and quantities of spare parts needed in the Army’s war reserve. In part because of the Army’s significant shortfall in meeting its reported war reserve spare parts requirement and its current funding plans, there is some risk associated with executing the two major theater war scenario, assuming requirements have been adequately identified. Because of limitations in the Army’s process for determining war reserve spare parts requirements, uncertainties exist regarding the accuracy of the war reserve spare parts requirements and funding needs. 
These limitations include (1) not using the best available data on the rate at which spare parts would be consumed during wartime for its war reserve spare parts requirements calculations, (2) having a potential mismatch between the Army’s process for determining spare parts requirements for war reserves and how the Army plans to repair equipment on the battlefield, and (3) lacking a fact-based assessment of industrial base capacity to provide needed parts for the two major theaters of war scenario. Some uncertainties are likely to remain for the foreseeable future as the Army contemplates a significant transformation of its forces and other changes are considered affecting military strategy and force structure. However, improvements in the above areas could lessen the degree of uncertainties that exist. We recommend that the Secretary of Defense assess the priority and level of risk associated with the Army’s plans for addressing the reported shortfall in Army war reserve spare parts. To provide accurate calculations of the Army’s war reserve spare parts requirements, we recommend that the Secretary of Defense direct the Secretary of the Army to promptly develop and use the best available consumption factors (i.e., Equipment Usage Profiles and Mean Usage Between Replacement factors) in calculating all spare parts requirements for the Army’s war reserve; eliminate potential mismatches in how the Army calculates its war reserve spare parts requirements and the Army’s planned battlefield maintenance practices; and develop fact-based estimates of industrial base capacity to provide the needed spare parts in the two major theater war scenario time frames. We further recommend that the Secretary of Defense include in future industrial capabilities reports more comprehensive assessments on industry’s ability to supply critical spare parts for two major theater wars. 
The Acting Deputy Under Secretary of Defense for Logistics and Materiel Readiness provided written comments on a draft of this report. The Department’s comments are reprinted in appendix I. The Department generally agreed with the report and our recommendations. It agreed that the Army must validate war reserve requirements and prioritize the support for those requirements. It also agreed that developing a strategy for determining industrial base capability was an important step in this process. While the Department outlined actions planned to address these issues, additional actions will be needed to fully address all of the recommendations. The Department concurred with the intent of our recommendation that the Secretary of Defense assess the priority and level of risk associated with the Army’s plans for addressing the reported shortfall in Army war reserve spare parts, but it indicated that it would determine whether an independent assessment is feasible by August 1, 2001. The intent of this recommendation was not to assess the feasibility of an independent assessment but rather to bring increased visibility to the Army’s plans for addressing the reported shortfall in the Army’s war reserves and to ensure secretarial review of, and concurrence with, the Army’s plan in light of funding priorities and risk. We continue to believe such a review is needed. The Department concurred with the recommendation we made for improving the accuracy of its calculation of war reserve spare parts requirements. It outlined specific actions and time frames for accomplishing planned actions. It noted that validation of consumption factors important to more precisely identifying requirements would be addressed by a team the Army has established to review the planning data used throughout the Army. The Department also concurred with our recommendation for improving the Army’s assessment of industry’s ability to supply critical spare parts for two major theater wars. 
It indicated that it will review the need for further industrial base assessments upon completion of an Army Industrial Base Strategy that is expected to be completed December 1, 2001. However, available information indicates that this study is focused on government production and maintenance facilities, not on private industry’s ability to provide spare parts. Accordingly, we believe that additional action will be needed to develop fact-based estimates of the industrial base capacity to provide the needed spare parts in the two major theater war scenario time frames. To ascertain what the Army was reporting about spare parts in its war reserve, we reviewed Quarterly Readiness Reports to the Congress and Joint Monthly Readiness Reports and discussed issues related to spare parts with Army headquarters and U.S. Central Command and U.S. Pacific Command officials. To compare the reported readiness status to the availability of parts to meet requirements for the two major theater war scenario, we obtained Army data on war reserve spare parts on hand compared to the requirements and discussed the results with officials in Army headquarters and the Army Materiel Command. To determine the reliability of the Army’s war reserve spare parts requirements, we reviewed the process and factors used for determining requirements and analyzed data on requirements and on-hand parts from officials of Army headquarters in the Office of the Deputy Chief of Staff for Logistics; the U.S. Army Materiel Command and related agencies, to include the Army Materiel Systems Analysis Agency, the Logistics Support Agency, the Field Support Command of the Operations Support Command, and the Aviation and Missile Command; and the Combat Arms Support Command. We visited the U.S. Central Command and its Army component and met with representatives of the U.S. Pacific Command to discuss the requirements they receive from the Army. 
We also attended several logistics planning conferences to learn more about how the Army plans to support the fighting commands with parts and other supplies. We performed our review between February 2000 and March 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Honorable Donald H. Rumsfeld, Secretary of Defense; and the Honorable Joseph Westphal, Acting Secretary of the Army. We will also make copies available to others upon request. Please contact me on (202) 512-5581 if you or your staff have any questions concerning this report. Key contributors to this report were Joseph Murray, Leslie Gregor, Paul Gvoth, and Robert Sommer.
Public safety agencies include the nation’s first responders (such as firefighters, police officers, and ambulance services), 911 call center staff, and a number of local, state, federal, and regional authorities. Communications, often through wireless land mobile radios, are vital to these agencies’ effectiveness and to the safety of their members and the public. Wireless technology requires radio frequency capacity in order to function, and existing wireless technology is designed to work within specified frequency ranges. Interoperability in the context of public safety communications systems refers to the ability of first responders to communicate with whomever they need to (including personnel from a variety of agencies and jurisdictions), when they need to, and when they are authorized to do so. It is important to note that the goal of being able to communicate when necessary and authorized is not the same as being able to communicate with any other individual at any time—a capability that could overwhelm the communications infrastructure and would likely impede effective communication and response time. Different first responder groups each have different professional practices, public safety missions, emergency response procedures, communication protocols, and radio frequencies. These differences have created a variety of obstacles to effective interoperable communications among first responders. Thus, facilitating interoperable communications has been a policy concern of public safety officials for many years. Land mobile radio systems are the primary means of communications among public safety personnel. These systems typically consist of handheld portable radios, mobile radios, base stations, and repeaters. Handheld portable radios are typically carried by public safety personnel and tend to have a limited transmission range. 
Mobile radios are often located in vehicles and use the vehicle’s power supply and a larger antenna, providing a greater transmission range than handheld portable radios. Base station radios are located in fixed positions, such as public service access points or dispatch centers, and tend to have the most powerful transmitters. A network is required to connect the different base stations to the same communications system. Repeaters are used to increase the effective communications range of handheld portable radios, mobile radios, and base station radios by retransmitting received radio signals. Figure 1 below illustrates the basic components of a land mobile radio system. The transmissions between the elements of a land mobile radio system consist of electromagnetic waves that propagate along designated frequencies of the radio spectrum. Each communications link uniquely occupies a specific frequency or set of frequencies for as long as information is being transmitted. The radio spectrum is a fixed, limited resource that is shared among government and nongovernment entities for many uses in addition to public safety communications, such as television broadcasting, AM/FM radio, and aeronautical radio navigation. Most public safety agencies use their allocated frequencies for voice communications but are increasingly using their portion of the spectrum to support more advanced technologies, such as data, imagery, and video transmissions. The specific frequency bands allocated to public safety agencies are shown in figure 2. Major frequency ranges that are used for public safety communications include the very high frequency (VHF) range and the ultra high frequency (UHF) range. VHF signals travel farther than UHF signals and thus are useful in suburban and rural areas. However, they generally cannot penetrate building walls very well. 
In contrast, UHF signals are more appropriate for denser urban areas as they penetrate buildings more easily, and it is less critical that the signals be able to propagate for long distances. The frequencies used by federal agencies are managed by the National Telecommunications and Information Administration, while the Federal Communications Commission manages state and local government frequencies. Radio systems are classified as either conventional or “trunked.” Conventional radio systems have dedicated frequencies—also referred to as channels—assigned to individual groups of users. When a user makes a call, other members of the group cannot use the channel until the call is over. In contrast, trunked systems allocate pools of channels for use by multiple individuals. When a call is made by a user on a trunked system, an available channel is automatically selected from the pool of channels, leaving the remaining channels available for others. While trunked systems are more complex and require more infrastructure than conventional systems, they allow for more efficient use of communication channels, reducing congestion. In order to effectively respond to emergencies such as natural disasters and domestic terrorism, public safety agencies need the ability to communicate with their counterparts in other disciplines and jurisdictions. However, the wireless communications systems used by many police officers, firefighters, emergency medical personnel, and other public safety agencies do not provide such capability. For example, emergency agencies responding to events such as the bombing of the federal building in Oklahoma City and the attacks of September 11, 2001, experienced difficulties in trying to communicate with each other. The 9/11 Commission concluded that communications interoperability problems contributed to the large number of firefighter fatalities that occurred at the World Trade Center. 
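The distinction drawn earlier between conventional and trunked systems, in which callers draw channels from a shared pool rather than each user group owning a dedicated frequency, can be sketched minimally as follows. The class, method, and caller names are illustrative, not taken from any actual radio system:

```python
# Minimal sketch of trunked channel assignment: callers draw channels from
# a shared pool, and a released channel becomes available to any group.

class TrunkedSystem:
    def __init__(self, num_channels):
        self.free = list(range(num_channels))  # the shared pool of channels
        self.in_use = {}                       # caller -> assigned channel

    def start_call(self, caller):
        if not self.free:
            return None                        # all channels busy; caller waits
        channel = self.free.pop()
        self.in_use[caller] = channel
        return channel

    def end_call(self, caller):
        # Returning the channel to the pool makes it available to anyone.
        self.free.append(self.in_use.pop(caller))

# Three channels serve many groups; a released channel is reused automatically.
system = TrunkedSystem(3)
first = system.start_call("police-1")
second = system.start_call("fire-2")
system.end_call("police-1")
third = system.start_call("ems-3")   # picks up the channel police-1 released
```

In a conventional system, by contrast, "ems-3" would be confined to its own dedicated channel even while the channels of idle groups sat unused, which is the congestion the trunked design avoids.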
Historically, first responder communications interoperability has been significantly hampered by different and incompatible radio systems. Different technologies and configurations, including proprietary designs, by different manufacturers have limited the interoperability of public safety wireless communications systems. These systems have also operated on different frequencies of the radio spectrum. In particular, public safety agencies have been assigned frequencies in new bands over time as available frequencies became congested, and as new technologies made higher frequencies available for use. Existing radios are unable to transmit and receive in all of the public safety frequencies, often making communications between first responders from different jurisdictions difficult. Additionally, as we have previously reported, there is a need for better frequency planning and coordination. Further, public safety agencies have historically planned and acquired communications systems without concern for interoperability, often resulting in multiple, technically incompatible systems in operation throughout any given local jurisdiction. A variety of technical approaches have been adopted to help improve interoperable communications, including the following: Swapping radios: Agencies maintain a cache of extra radios that they can distribute during an emergency to other first responders whose radios are not interoperable with their own. The advantage of this solution is that it does not require that all existing radios be replaced, an important consideration when funds to buy new equipment are limited. However, this approach requires significant logistical support and careful management to implement successfully. Patching: Two or more incompatible radio systems are connected to a central switchboard-like system that translates a signal sent from one connected system so that it can be received by any of the other connected systems. 
The principal advantage of this solution is that agencies can continue to use existing systems that would otherwise be incompatible. A major disadvantage is that patching requires twice as much spectrum because a patched transmission occupies separate channels on each connected system.

Shared channels or mutual aid channels: Agencies agree to set aside a specific channel or channels for connecting to other incompatible systems. This approach provides direct interoperable communications and only occupies one channel per conversation. However, it can cause congestion since these channels require dedicated frequencies and thus have limited capacity.

Shared systems: The use of a single or common radio system—typically a trunked system—to provide service to most agencies within a region. Shared systems are the most robust form of interoperability and do not require dedicated channels. While this approach produces optimal performance, it can be very expensive, because it generally requires purchasing all new radios and transmission equipment.

Technologies that can help implement shared systems include the following:

Internet Protocol based systems: Using the Voice over Internet Protocol, advanced communications systems can offer the flexibility to transmit voice conversations over a data network such as the Internet or a private network.

Software-defined radios: These are intended to allow interoperability among agencies using different frequency bands, different operational modes (digital or analog), proprietary systems from different manufacturers, or different modulations (such as AM or FM). However, software-defined radios are still being developed and are not yet available for use by public safety agencies.

However, interoperability cannot be achieved solely by implementing technical solutions. Coordination among different agencies and governmental entities is also critical.
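The spectrum tradeoff among these technical approaches can be made concrete with a small accounting sketch. The functions and figures below are hypothetical and for illustration only; they simply encode the rule stated above that a patched conversation occupies one channel on each connected system, while a shared mutual aid channel carries the whole conversation on one.

```python
# Hypothetical channel-accounting sketch; function names and numbers are
# illustrative, not drawn from any real deployment.

def patched_channels(conversations, systems_bridged=2):
    """Each patched conversation occupies one channel per bridged system."""
    return conversations * systems_bridged

def shared_channels(conversations):
    """Each conversation on a shared mutual aid channel occupies one channel."""
    return conversations

# Three simultaneous police/fire conversations bridged by a patch:
print(patched_channels(3))   # 6 channels tied up across the two systems
print(shared_channels(3))    # 3 channels, one per conversation
```

The sketch also hints at the congestion point noted above: even shared channels consume one dedicated frequency per simultaneous conversation, so a region that sets aside only a handful of mutual aid channels can exhaust them quickly during a large incident.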
Response to an emergency may involve all levels of government and many different disciplines, such as law enforcement organizations, fire departments, emergency medical services, transportation, natural resources, and public utility sectors. Each of these agencies is likely to have its own policies, procedures, and communications protocols when responding to an incident. A simple example is the word "fire," which to a firefighter means that something is burning but to a police officer is a command to shoot a weapon. Resolving such cultural and procedural differences can be challenging. Further, the extent to which interoperable communications are needed among different agencies, disciplines, and levels of government (federal, state, local, and tribal) varies based on the size, significance, and duration of an emergency event. Increasing degrees of interoperability may be needed for (1) routine day-to-day coordination among a few agencies in a local area, (2) extended operations involving agencies from multiple jurisdictions working on a larger problem (such as the 2002 sniper attacks in the Washington, D.C., metropolitan area), and (3) a major, large-scale event that requires response from a range of local, state, and federal agencies and disciplines (such as major wildfires, hurricanes, or the terrorist attacks of September 11, 2001).

In 2004, we reported that a fundamental barrier to successfully addressing interoperable communications problems for public safety was the lack of effective, collaborative, interdisciplinary, and intergovernmental planning. We recommended that DHS take a number of actions to address this barrier, such as determining the current status of interoperable communications across the nation and encouraging states to establish comprehensive statewide interoperability plans and to certify the alignment of their grant applications with their statewide plans. DHS has taken steps to address these recommendations.
For example, it recently completed a national survey of first responders to determine the current status of their interoperability capabilities, and it has required states to develop statewide communications plans by the end of 2007. SAFECOM is a DHS program intended to strengthen interoperable public safety communications at all levels of government. The program provides research, development, testing and evaluation, guidance, tools, and templates on communications-related issues. We previously reported that changes in leadership delayed progress during the initial years of the SAFECOM program and that the program suffered from a lack of leadership and focus. Since 2004, SAFECOM has spent $20.4 million developing several tools and providing assistance to help guide states and localities as they work to improve the interoperability of their communication systems. Table 1 outlines several tools and guidance that SAFECOM had developed as of July 2006. The program continues to develop additional tools. We previously recommended that in order to enhance the ability of SAFECOM to improve communications among emergency personnel from federal, state, local, and tribal agencies, SAFECOM officials should complete written agreements with the project’s identified stakeholders (including federal agencies and organizations representing state and local governments) that define the responsibilities and resource commitments that each of those organizations will assume and include specific provisions that measure program performance. Since we made our recommendation, SAFECOM program officials have established a governance charter for the program, which outlines the roles, relationships, and operating guidelines for participating stakeholders. 
The Office of Grants and Training, which is scheduled to become part of the Federal Emergency Management Agency, is a separate entity within DHS that is responsible for, among other things, providing grants and technical assistance to states and localities to help them improve their interoperable communications. Grants and Training provides funding to states and requires that at least 80 percent of grant funding provided to states through the Homeland Security Grant Program be passed to localities. Grants and Training also provides additional funding to address the unique planning, equipment, training, and exercise needs of UASI areas. DHS uses a partly risk-based approach to allocate grant funds. State agencies submit proposals to DHS, which form the basis for its risk-based decisions. During the most recent grant allocation process in 2006 for the Homeland Security Grant Program, each state and territory received a portion of its grant funding through a base allocation. The remainder of funds was allocated based on an analysis of risk and need. In fiscal year 2006, the UASI funds were allocated based on risk and effectiveness. DHS estimated the relative risk of successful terrorist attacks on selected urban areas, considering threat, vulnerability, and consequences for both asset-based and geographic factors. On the basis of this analysis, it ranked the UASI areas and identified 35 urban areas as eligible to apply for UASI funding. In addition, the 11 urban areas that received funding previously but were not identified as UASI areas in 2006 were granted eligibility for one additional year of funding. DHS also used a peer review process to assess the effectiveness of each of the 35 urban areas' proposed investments using the grant funds. Grants and Training has also established a monitoring program in which preparedness officers validate that grant funds are being administered legally and in accordance with the guidance provided to grantees.
Preparedness officers work with the states to help address areas of concern, needs, and priorities. The monitoring program is also intended to provide a general assessment of where states and localities are in protecting their citizens. In addition, in efforts to control the use of awards, DHS officials have developed an Approved Equipment List that provides information on allowable equipment expenditures. Further, Grants and Training established the Interoperable Communications Technical Assistance Program, which has provided guidance and technical assistance to the UASI areas. While the program focuses mostly on providing guidance and assistance to these specific areas, assistance is also provided to non-UASI areas. Table 2 provides a list of the assistance and guidance offered by Grants and Training.

Another grant program focused on interoperable communications is the Department of Justice's Community Oriented Policing Services (COPS) Interoperable Communications Grant program. The program awards technology grants to law enforcement agencies for interoperable communications and information sharing. While the program used to have a larger role in providing grant funding to states and localities, its scope and budget were significantly reduced in 2006 in an effort to eliminate overlap with DHS's grant program.

More recently, the 2007 DHS Appropriations Act transferred many SAFECOM program responsibilities to a new Office of Emergency Communications (OEC). This new office, which is not yet operational, is to take over the Interoperable Communications Technical Assistance Program from Grants and Training and the Integrated Wireless Network project, which is intended to create a consolidated federal wireless communications service for federal public safety agencies. This new office is tasked with improving overall emergency communications for first responders, as well as improving interoperability.
In addition to the OEC, the Office for Interoperability and Compatibility within the Science and Technology Directorate will continue to house the remaining elements of SAFECOM related to research, development, testing and evaluation, and standards.

In 1989, the Association of Public Safety Communications Officials, the National Association of State Telecommunications Directors, and selected federal agencies established Project 25 to develop open standards for vendors to use when designing land mobile radio communications equipment. Project 25 has the following four primary objectives: enable effective inter- and intra-agency communications, improve radio spectrum efficiency, focus equipment and capabilities on public safety needs, and leverage an open architecture to promote competition across land mobile radio vendors. Project 25 standards are intended to be a suite of national standards, based upon public safety user requirements, which define operable and interoperable communications equipment for first responders. When complete, this suite of standards is intended to allow for specifications to be written for interfaces between the various components of a land mobile radio system. The Association of Public Safety Communications Officials, the National Association of State Telecommunications Directors, and federal agency representatives work with the Telecommunications Industry Association (TIA)—an American National Standards Institute-accredited standards development organization—to develop and maintain the standards.

According to DHS, $2.15 billion in grant funding was awarded to states and localities from fiscal year 2003 through fiscal year 2005 for communications interoperability enhancements. This funding, along with technical assistance, has helped to make improvements on a variety of specific interoperability projects.
However, in the states we reviewed, strategic planning has generally not been used to guide investments and provide assistance to improve communications interoperability on a broader level. Specifically, not all states had plans in place to guide their investments toward long-term interoperability gains; no national plan was in place to coordinate investments across states; and while UASI officials stated that the technical assistance offered to them had been helpful, DHS curtailed full-scale exercises, limiting their value in measuring progress. Further, although DHS has required states to implement statewide plans by the end of 2007, no process has been established for ensuring that states’ grant requests are consistent with their statewide plans. Until DHS takes a more strategic approach to improving interoperable communications—such as including in its decision making an assessment of how grant requests align with statewide communications plans—and until more rigorous exercises are conducted, progress by states and localities in improving interoperability is likely to be impeded. One of the main purposes of the DHS grants program is to provide financial assistance to states and localities to help them fund projects to develop and implement interoperable communications systems. In addition, as previously mentioned, the Interoperable Communications Technical Assistance Program is intended to provide on-site assistance to UASI areas to, among other things, assist with developing tactical interoperability plans, planning exercises, assessing communication gaps, and designing interoperable systems. The four states we reviewed received assistance from DHS, which helped make improvements on specific interoperability projects. 
Florida: Florida has spent $36.5 million in DHS funds to develop a system called the Florida Interoperability Network, which establishes network connections between federal, state, and local dispatch centers across Florida and provides mutual aid channels throughout the state. As a result, the level of interoperability across the state has improved significantly. First responders in 64 of Florida's 67 counties are now able to have their communications patched to each other as needed via the network. Previously, they had no such infrastructure for achieving interoperability. However, officials from localities in Florida raised questions about the long-term sustainability of the network. Each connected jurisdiction must pay the ongoing costs of its connection to the Florida Interoperability Network, and smaller jurisdictions are likely to find this unaffordable. Further, Florida officials remarked that training across the state is still incomplete. Additionally, in the Miami UASI region, a majority of the Urban Area Security Initiative funding for interoperable communications has been used to acquire communications equipment, such as radios, and interoperability solutions, such as devices that interconnect first responders on disparate radios, to make improvements in Miami City and in Miami-Dade County. However, limited UASI funding had been dedicated to making interoperability improvements in other localities in the Miami UASI, such as Monroe and Broward Counties.

Kentucky: Kentucky used a portion of its DHS funding to expand the use of mutual aid interoperability radio channels that allow agencies on different communication systems throughout Kentucky to tune to a dedicated, shared frequency to communicate. Prior to this initiative, first responders operating on different frequencies were unable to communicate.
Currently, approximately 34 percent of applicable agencies have signed a memorandum of understanding committing to use the mutual aid channels in accordance with standardized procedures. However, mutual aid channels have limited capacity, and Kentucky has yet to implement a long-term solution for a statewide voice communications system that will allow federal, state, and local first responders to communicate directly as needed. Kentucky has also used DHS funding to implement a statewide wireless data communications system. The system provides functionality such as statewide records management, real-time crime coverage and data collection, and instant messaging. First responders use mobile data terminals to communicate with each other and, in many cases, retrieve information from agency databases. Kentucky's mobile data network currently has coverage across approximately 95 percent of the state's primary and secondary road systems. Such capabilities were not available to Kentucky's first responders prior to this initiative. In the Louisville UASI, local officials have utilized DHS funding to implement patching mechanisms to connect different communication systems throughout the region. However, according to officials, communications channels are frequently congested because of the amount of patching that needs to be done to connect responders.

New York: In New York, DHS funding is generally being utilized by localities to address local interoperability issues within their counties and with neighboring counties. For example, Albany County is acquiring a new interoperable system that connects first responders on many disparate systems within Albany County and neighboring counties. Prior to this system, there was no single voice system or network that would allow incident commanders and first responders to be able to communicate directly. However, the local solutions do not always incorporate state and federal systems.
For example, the state is using state funds to develop and implement a separate and incompatible statewide network called the Statewide Wireless Network, which localities are not required to join. Albany County, for example, has no immediate plans to connect its new system to the statewide system because of uncertainties about the expense and the expected benefits for the county. In the New York City UASI, local officials have used a portion of DHS funding to implement a citywide mobile wireless network. This system is intended to provide first responders throughout the city with high-speed data access to support large file transfers, including accessing federal and state anticrime and antiterrorism databases, fingerprints, mug shots, city maps, and full-motion streaming video.

Oregon: Oregon, in accordance with DHS guidance, has dedicated most of its DHS funding to local projects that improve interoperability in specific regions. For example, Jackson and Josephine Counties are jointly implementing an interoperable communications system. Previously, first responders in these two neighboring counties relied on indirect means for establishing interoperable communications, such as radio channels, patching mechanisms, and a mobile command vehicle equipped with a cache of radios in different frequencies and a patching device that could be deployed as needed. However, this new system does not include federal or state first responders. In addition, limited DHS funding has been used to plan the development of the Oregon Wireless Interoperability Network. This system is intended to replace state agencies' deteriorating systems with a new system. It is also intended to connect local agencies that continue to use their existing systems to other local agencies with which they do not already have interoperability. To date, the development of this system has not been initiated.
In the Portland UASI, DHS funding was used to install repeaters in Columbia County to enhance interoperability with the other four counties in the urban area. However, while this has improved interoperability, not all Columbia County first responders are able to use this solution. Therefore, UASI funding was also used to purchase a supply of reserve radios—referred to as a cache—that can be shared. Table 3 shows the amount of DHS funding states and localities have received and examples of what the money has been used for.

According to SAFECOM guidance, interoperability cannot be solved by any one entity alone, and, therefore, an effective and interoperable communications system requires a clear and compelling statewide strategy focused on increasing public safety effectiveness and coordination across all related organizations. A statewide interoperability plan is essential for outlining such a strategy. Such a plan should establish long-term objectives but also include short-term solutions that help incrementally achieve those long-term objectives. Thus, establishing long-term plans helps ensure that near-term solutions are consistent with the end goal. The narrow and specific use of DHS funding in the states we reviewed can be traced in part to the lack of statewide plans; interoperability investments by individual localities have not been coordinated toward achieving a broader goal for the state. For example, Kentucky, which has received grant funding totaling approximately $50 million since fiscal year 2003 according to DHS, has not yet developed a statewide communications plan, although in January 2007, officials stated that they had begun developing a plan.
While Kentucky has recently begun working to assess how best to address statewide needs, to date, grant reviewers at the state level who are in charge of disbursing DHS grant money to localities have had limited means for determining whether funding requests for equipment and training were compatible with statewide interoperability goals. For example, evaluators were required to assess aspects of request proposals such as whether they fully addressed the measurable objectives expected for a new wireless communication system and whether they addressed how the applicant agency would communicate with other public safety and/or public service organizations at the local, state, and federal levels. However, the available criteria do not provide evaluators with an overall statewide strategy against which to assess whether a locality's proposal is aligned. As a result, the equipment and activities that localities have purchased have tended to address short-term voice communication solutions for local interoperability problems, while long-term, statewide solutions have not been addressed. However, as previously stated, Kentucky has developed a data communications network to supplement gaps in its voice communications. Similarly, New York does not yet have a statewide communications plan and, therefore, does not utilize DHS grant funding in support of such a plan. While state officials recommend that localities invest in interoperable communications, they provide no additional guidance to localities to ensure that local investments are consistent with statewide goals. As a result, localities have generally used the funding to address local interoperability issues within their counties and neighboring counties, with little regard for state and federal systems.
For example, while New York is currently in the process of deploying the Statewide Wireless Network for $2 billion, localities are not required to participate, and local interest in the statewide system has been limited. As a result, localities are continuing to develop their own interoperability solutions that do not incorporate the network. Among localities we reviewed, Onondaga County is implementing its own $33 million interoperable communications system independently of the network, and Albany County, likewise, is currently developing a $1.7 million interoperability system that does not incorporate the Statewide Wireless Network. Officials stated that once the network's pilot period is complete, they will decide whether to participate in the network. In accordance with a previous recommendation, DHS has required grant recipients to develop and adopt a statewide communications plan by the end of 2007. Additionally, the 2007 DHS Appropriations Act states that DHS may restrict funding to a state if it does not submit a statewide interoperable communication plan. However, despite our other previous recommendation that DHS require states to certify that grant applications are consistent with statewide plans, no process has yet been established for ensuring that states' grant requests are consistent with their statewide plans and long-term objectives for improving interoperability. Grants and Training officials are considering instituting such a process, but they do not have specific plans to do so. Because of the lack of coordination, state and local governments are investing significant resources, including DHS grant funds, in developing independent interoperability solutions that do not always support each other's needs.
Until the DHS-mandated statewide communications plans are in place, and processes have been established for ensuring that each state's grant request is consistent with its statewide plan and longer-term interoperability goals, progress by states and localities in improving interoperability is likely to be impeded.

In addition to statewide plans, an overarching national plan is critical to coordinating interoperability spending, especially where federal first responders are involved. According to the Public Safety Wireless Advisory Committee, improving interoperable communications across the nation will require a national plan that includes all levels of government and defines operational policies and procedures and the proper use of national communications resources. In responding to large-scale events—such as wildfires, hurricanes, or terrorist attacks—state and local government first responders require interoperable communications with federal agencies. To date, however, interoperability investments have tended to be isolated and piecemeal, in part because they have not been guided by a comprehensive national plan. For example, officials stated that Oregon and its bordering states—Washington, California, and Idaho—are each working independently to try to implement and meet federal communication requirements and improve interoperability. In a large-scale emergency, where first responders may need to coordinate with agencies from other states and a variety of federal agencies, the lack of national-level planning can result in substantial interoperability problems. During Hurricane Katrina, for example, Florida first responders spent half a day trying to contact their counterparts in Louisiana and Mississippi in an effort to share communications equipment. Had these states coordinated before the catastrophe, less time and energy would likely have been wasted.
The lack of a national strategy has also left state officials uncertain about whether they are taking appropriate steps to plan for interoperability. For example, Oregon officials indicated they are uncertain whether the approach they are taking is the best way to solve their interoperability problems. The 2007 DHS Appropriations Act requires DHS to develop a National Emergency Communications Plan by March 2008. Among other things, the plan is to identify necessary emergency communications capabilities for first responders and government officials, identify obstacles to interoperable communications, provide both short-term and long-term solutions to those obstacles, and establish goals and time frames for the deployment of emergency communications systems based on new and existing equipment across the United States. According to state and local officials, the Interoperable Communications Technical Assistance Program has been beneficial to each of the four UASI areas we visited. For example, according to Miami officials, the program provided extensive support in the development of the tactical interoperable communications plan for the Miami area. Technical assistance representatives held meetings with each of the Miami area public safety agencies to compile a regional communications equipment inventory. Similarly, according to Louisville officials, the Interoperable Communications Technical Assistance Program held a 2-day workshop on developing the tactical interoperable communications plan for the Louisville area. Officials stated that this workshop represented the first time that all relevant communications officials and emergency responders were involved in a collaborative effort. Guidance for the 2006 Homeland Security Grant Program required each of the high-risk UASI areas to plan and conduct a full-scale exercise to validate the effectiveness of their tactical interoperable communications plans. 
Full-scale exercises are the most complex type of exercise, involving multiple agencies and jurisdictions in testing plans, policies, and procedures. They are intended to be conducted in a real-time, stressful environment that closely mirrors real events. DHS required the exercises as a way to measure the progress each UASI has made in improving interoperability and developed "scorecards" to capture the results of the exercises. However, while DHS provided extensive assistance to the urban areas in developing their tactical interoperability communications plans, it curtailed the exercises that were intended to validate the robustness and completeness of each plan. Due to the complexity of these exercises, the UASI areas were originally allotted 12 months to plan and execute robust, full-scale exercises; DHS subsequently reduced this to 5 months. DHS officials indicated that they accelerated the deadline so that they could use the results as inputs into the interoperability scorecards that they published in January 2007. To compensate for the reduced time frame, DHS reduced the requirements of the full-scale exercise, advising the UASI areas to limit the scope and size of their activities. In reducing the scope of their exercises, the UASI areas had to reduce the extent to which they tested the robustness and effectiveness of their interoperability plans. Of the four UASI areas we visited, Portland, Miami, and New York City each reduced the scope of their exercise so they could meet DHS's accelerated deadline. For example, Portland had to significantly reduce the number of participants from each of the counties participating in the exercise. According to Portland officials, their exercise was not realistic for responding to a real-world incident. Likewise, New York City officials stated that they would have executed a higher quality exercise if DHS had not reduced the time frame.
Moreover, according to the 2007 grant guidance, the UASI areas are not required to conduct any additional exercises to further validate their plans. Without robust exercises to validate tactical interoperability communications plans, the UASI areas can have only limited confidence in the plans' effectiveness, and thus the value of DHS's efforts may continue to be limited. Similarly, the constraints placed on the exercises mean that DHS's scorecards of each of the UASI areas are based on questionable data.

Although initiated in 2001, the SAFECOM program has made limited progress in improving communications interoperability at all levels of government. The program has not addressed state and local interoperability with federal agencies, a critical element of interoperable communications that is required by the Intelligence Reform and Terrorism Prevention Act of 2004. Further, while the program has focused on helping states and localities improve interoperable communications by developing tools and guidance for their use, SAFECOM's progress in this area has been limited in the selected states. Specifically, officials from selected states and localities often found that the tools and planning assistance provided by the program were not helpful, or they were unaware of what assistance the program had to offer. The program's limited effectiveness can be linked to poor program management practices, including the lack of a plan for improving interoperability across all levels of government and inadequate performance measures that would provide feedback to better attune tools and assistance with first responder needs. Until SAFECOM adopts these key management practices, its progress is likely to remain limited.
When SAFECOM was established in 2001, as one of the Office of Management and Budget’s 25 electronic government initiatives under the management of the Department of the Treasury, its goals were to (1) achieve federal-to-federal interoperability throughout the nation, (2) achieve federal to state/local interoperability, and (3) achieve state/local interoperability throughout the nation. Like the other e-government initiatives, the program was expected to achieve its goals within 18 to 24 months. As we reported in 2004, these are challenging tasks that will take many years to fully accomplish, and the program had made very limited progress at the time of our review. Since 2001, the management and goals of the program have changed several times. Most recently, in 2003, the SAFECOM program was transferred to the Office of Interoperability and Compatibility within the Directorate of Science and Technology in DHS. Its goals included increasing interoperable communications capacity of local, tribal, and state public safety agencies, and increasing the number of states that have initiated or completed statewide plans. Program officials now estimate that a minimum level of interoperability will not occur until 2008, and it is unknown when full interoperability will occur. In addition, the Intelligence Reform and Terrorism Prevention Act of 2004 required DHS to establish a program to enhance public safety interoperable communications at all levels of government, including federal, as well as state and local governments. SAFECOM has been designated as the program responsible for carrying out this requirement. While SAFECOM is required to improve interoperable communications at all levels of government, the objectives that the program has been working toward do not include improving interoperability between federal agencies and state and local agencies. 
For example, when conducting their baseline national survey of first responders to determine the current level of interoperability, program officials included state and local officials, but not federal officials. The survey included an extensive list of questions in which respondents were asked to rate interoperability (1) with other disciplines, (2) with other jurisdictions, and (3) between state and local governments. Respondents were also asked at the end of the survey to list federal agencies they interoperate with; however, no effort was made to gauge the level of interoperability with the federal government, as had been done for other disciplines and jurisdictions and between state and local governments. As a result, SAFECOM has not addressed a variety of problems involving interoperability between federal and state and local agencies. According to first responders, these difficulties arise when trying to establish interoperable communication between federal and state and local agencies:

Uncoordinated interoperability investments. The Departments of Justice, Homeland Security, and Treasury are developing the Integrated Wireless Network (IWN) to create a consolidated federal wireless communications service for federal public safety and law enforcement agencies. The level of interoperability that state and local first responders will have with federal first responders on this network is unknown.

Frequency incompatibilities. The National Telecommunications and Information Administration, which manages frequencies used by federal agencies, and the Federal Communications Commission, which manages frequencies used by state and local governments, have established conflicting time frames for when federal agencies and state and local agencies need to implement narrowband systems.
Further, according to an Associate Chief of DHS’s Office of Border Patrol, when federal communications networks are configured to narrowband, federal agencies could have difficulty interoperating with local wideband systems unless special radios are procured that can operate on both the wideband and narrowband systems.

Use of encryption. Federal agencies, such as the Federal Bureau of Investigation (FBI), use encryption to secure their radio communications. Encryption can be vitally important in preserving the safety and security of their officers. However, they have not developed procedures for sharing the keys to decrypt the communication with state or local first responders in order to be able to communicate with them.

Unclear coordination procedures. There is uncertainty within the first responder community regarding the allowable level of coordination and collaboration between federal agencies and state and local agencies. For example, while the National Telecommunications and Information Administration eliminated its requirement that state and local officials obtain written permission to use federal frequencies in May 2006, FBI officials that we interviewed were unaware that they were allowed to share their frequencies without written permission.

In lieu of having communications systems that enable direct interoperability between federal first responders and state and local first responders, first responders have resorted to alternative means of communicating. For example, state or local agencies may loan radios to federal first responders or physically pair a federal first responder with a state or local responder so they can share information and relay it back to their agencies. While approaches such as these may be effective in certain situations, they reflect a general lack of planning for communications interoperability.
In many cases, using “work-arounds” such as these could reduce the efficiency and effectiveness of the overall public safety response to an incident. SAFECOM officials stated that the program’s focus has been on state and local agencies because they consider them to be a higher priority. Further, while they stated that it would be possible for federal agencies to make use of some of the planning tools being developed primarily for state and local agencies, SAFECOM has not developed any tools that directly address interoperability with federal agencies. However, interoperability with federal first responders remains an important element in achieving nationwide interoperability and is part of SAFECOM’s tasking under the Intelligence Reform and Terrorism Prevention Act of 2004. Until a federal coordinating entity such as SAFECOM makes a concerted effort to promote federal interoperability with other governmental entities, overall progress in improving communications interoperability will remain limited.

In addition to supporting development of the Project 25 suite of interoperability standards (discussed in a later section of this report), SAFECOM’s activities have focused primarily on providing planning tools to state and local governments. However, based on our review of four states and selected localities, SAFECOM’s progress in achieving its goal of helping these states and localities improve interoperable communications has been limited. Several state and local officials did not find the tools and guidance useful. For example, of the 10 locations we visited that were aware of the tools and guidance, 6 had not used the program’s Statement of Requirements or its Public Safety Architecture Framework. Additionally, 3 of the 4 states we reviewed had not used the program’s Statewide Communication Interoperability Planning Methodology to develop a statewide communication plan.
Further, officials from 4 of the 15 jurisdictions we reviewed were unaware that the SAFECOM program existed or that it provided interoperability guidance. SAFECOM’s Interoperability Continuum was the most widely used and recognized of its tools. Seven of the 15 states and localities we visited indicated that they used the continuum to assess their interoperability status and plan improvements. Another initiative that had a significant impact was the Regional Communications Interoperability Pilot. Officials from Kentucky—one of the two states that participated in the pilot—indicated that the pilot was very helpful in facilitating communications planning by identifying relevant stakeholders and bringing those stakeholders together for extended discussions about interoperability. And in Nevada, this program resulted in documentation of suggested near-term and long-term goals for improving interoperability. However, the SAFECOM tools that were not widely used represent a significant investment of resources by DHS. For example, program officials said that they spent $9.2 million developing the Statement of Requirements and $2.7 million developing the Public Safety Architecture Framework. State and local officials provided the following reasons for the limited utilization of SAFECOM tools:

The tools and guidance are too abstract and do not provide practical implementation guidance on specific issues. For instance, the Statement of Requirements focuses on functional requirements based on textbook definitions of a variety of interoperable communication subjects, such as public safety communication needs, public safety roles and functions, and the levels of operability and interoperability for each major public safety discipline. SAFECOM officials indicated that the Statement of Requirements was meant to be a forward-looking document unconstrained by the limitations of current technology.
However, states and localities must work to improve interoperability with technology that is currently available, and the Statement of Requirements does not describe specific technologies, infrastructure, or business models that state and local agencies can refer to when making key decisions regarding improvements to their communication systems. Additionally, neither the Statement of Requirements nor the Public Safety Architecture Framework identifies specific actions a state or local agency can take to make improvements.

The documents are lengthy and hard to use as reference tools. For example, the two published volumes of the Public Safety Architecture Framework are approximately 270 pages combined and contain complex information about topics such as the elements and subelements of communication systems and their relationships to each other and to the environment. Officials indicated that they do not have the time to examine and analyze long reports that they believed contained limited useful information. According to SAFECOM officials, they plan to address this concern by publishing a third volume to guide public safety agency officials through the process of developing a communications system architecture. However, even with additional guidance, the framework will remain lengthy and complex.

Awareness of SAFECOM and its tools has not reached all state and local agencies. Program officials indicated that they take steps to try to reach out to the broad first responder community, such as by publishing articles in major police and fire periodicals, presenting at events covering communications interoperability, and publishing a quarterly newsletter on interoperability issues called Interoperability Today. However, despite these efforts, several localities that we visited were completely unfamiliar with the program or the assistance it provides.
Figure 3 identifies which of SAFECOM’s tools, guidance, or other assistance were used by officials at the locations we visited. Recently, SAFECOM has focused more on specific implementation issues, creating tools such as a writing guide for developing memorandums of understanding that could be used to establish agreements on the sharing of communication systems across agencies and jurisdictions. Officials have also developed a guide for writing standard operating procedures, which could be used to prepare written guidelines for incident response. Because these tools were still new, we did not receive assessments of them from state and local officials. One factor contributing to the limited impact that SAFECOM has had on improving communications interoperability is that its activities have not been guided by a program plan. A program plan is a critical tool to ensure a program meets its goals and responsibilities. Such a tool is used to align planned activities with program goals and objectives, as well as define how progress in meeting the goals will be measured, compared, and validated. For example, a program plan could be a useful tool for ensuring that key program goals—such as promoting interoperability across all levels of government including federal responders—are being addressed. In addition, a program plan would provide the structure to help plan tools and guidance that would address the greatest needs. Further, a program plan could be used to delineate performance measures, which are essential to determining the effectiveness of a program and for identifying the areas of a program that need additional attention. Rather than using a program plan to guide their activities, SAFECOM officials stated that they develop tools and guidance based on a list of suggestions obtained from first responders.
The SAFECOM Executive Committee—a steering group composed of public safety officials from across the country—prioritizes the list of suggestions, but this prioritization has not been used to develop a plan. Instead, program officials have made ad hoc decisions regarding which suggestions to implement based on executive committee input, as well as the difficulty of implementation. While this approach incorporates a degree of prioritization from first responders, it does not provide the structure and traceability of a program plan. Program officials have established six performance measures to assess progress, including

the percentage of fire, emergency medical services, and law enforcement organizations that have established informal interoperability agreements with other public safety organizations;

the percentage of public safety agencies that report using interoperability to some degree in their operations;

the percentage of states that have completed statewide interoperability plans;

the percentage of grant programs for public safety communications that include SAFECOM guidance; and

the amount of reduction in the cycle time for national interoperability standards development.

However, several key aspects of the program are not being measured. For example, one of the program’s goals is to increase the development and adoption of standards. However, the only associated performance measure is reduction in the cycle time for national interoperability standards development—not the extent to which adoption of standards has increased or whether interoperability is being facilitated.
Also, in assessing the growth of interoperable communications capacity at local, tribal, and state public safety agencies, SAFECOM’s measures—the percentage of states that have established informal interoperability agreements with other public safety organizations and the percentage of public safety agencies that report using interoperability to some degree in their operations—address only two of the five areas that SAFECOM has defined as key to improving interoperability (it does not assess improvements made in governance, technology, or training). Moreover, none of the program’s measures assess the extent to which the first responder community finds the tools and assistance helpful or the effectiveness of program outreach initiatives. Consequently, measures of the effectiveness of the program and areas for improvement are not being collected and are not driving improvements in the program, contributing to its limited impact. According to SAFECOM officials, by mid-2007, they plan to establish a measure to assess customer satisfaction. Until DHS develops and implements a program plan that includes goals focusing on improving interoperability among all levels of government, establishes performance measures that determine if key aspects of the SAFECOM program are being achieved, and assesses the extent to which the first responder community finds the tools and assistance helpful, the impact of its efforts to improve interoperable communications among federal, state, and local agencies will likely remain limited.

Until recently, little progress had been made in developing Project 25 standards—a suite of national standards that are intended to enable interoperability among the communications products of different vendors.
Although one of the eight major subsets of standards was defined in the project’s first 4 years (from 1989 to 1993), from 1993 through 2005, no additional standards were completed that could be used by a vendor to develop elements of a Project 25 compliant system. To its credit, over the past 2 years, the private-sector coordinating body responsible for Project 25 has defined specifications for three additional subsets of standards. However, ambiguities in the published standards have led to incompatibilities among products made by different vendors, and no compliance testing has been conducted to ensure vendors’ products are interoperable. Nevertheless, DHS has strongly encouraged state and local agencies to use grant funding to purchase Project 25 radios, which are substantially more expensive than non-Project 25 radios. As a result, states and local agencies have purchased fewer, more expensive radios, which still may not be interoperable and thus may provide them with minimal additional benefits. Until DHS modifies its grant guidance to provide more flexibility in purchasing communications equipment, states and localities that purchase Project 25 equipment cannot be assured that their investments are likely to result in meaningful gains in interoperability. Initial development of Project 25 began over 15 years ago. It took 4 years, from 1989 to 1993, to develop the standards that comprised the first of eight interfaces, known as the common air interface. The common air interface is one of the most critical elements of Project 25, and, therefore, efforts to develop standards for this interface were initiated first. However, from 1993 through 2005, no additional standards were developed that could be used by a vendor to develop additional elements of a Project 25 compliant system. 
Concerned about the slow development of Project 25 standards, the conference committee on the Consolidated Appropriations Act for fiscal year 2005 encouraged NIST and the Department of Justice to work with SAFECOM to consider the issuance of interim standards for interoperable communication systems. According to NIST officials, they, along with their federal partners, have established a process for developing interim standards and plan to institute it if progress in the development of Project 25 standards is not sufficiently accelerated. Industry representatives and public safety practitioners responded to these events by increasing the pace and scope of their standards development activities. As a result of their efforts, in the past 2 years, significant progress has been made in defining three additional critical interfaces: the fixed station subsystem interface, the console subsystem interface, and the inter-RF subsystem interface. NIST officials indicated that the focus has been on these interfaces because they will add significant functionality to the overall set of Project 25 standards. Table 4 shows the progress that has been made on each of the eight Project 25 interfaces as of August 2006. Figure 4 shows the relationships among these interfaces. There are a number of obstacles hindering effective implementation of first responder communications systems based on Project 25 standards:

Standards are incomplete or not well-defined: NIST officials have stated that key standards that have been defined for several of the eight interfaces have not been adequately specified, allowing vendors to develop products based on inconsistent interpretations. For example, Project 25 manufacturers have determined that the specifications for the conventional and trunked mode operations of the common air interface—which is considered to be the most mature of the eight interfaces—were vague and led to inconsistent interpretations.
More specifically, between 2003 and 2005, NIST conducted interoperability tests on the conventional operations mode of six different manufacturers’ radios and found that none of them passed all aspects of the tests. In addition, according to NIST officials, in 2005, a manufacturer conducted interoperability tests on the trunked operations mode of three manufacturers’ radios and also found that none of them passed the tests. More recently, in 2006, a manufacturer conducting interoperability tests found improvements in the consistency of other manufacturers’ interpretations. However, according to NIST officials, ambiguities still need to be resolved in this interface. Additionally, many options available on radios are not specified in the standards, allowing vendors to address these capabilities with unique or proprietary technologies, which can cause interoperability problems. As a result, while recent tests have shown improvements, vendors have developed incompatible, proprietary products rather than interoperable, standards-based products.

Lack of compliance testing has limited product interoperability: According to NIST officials, formal peer-review testing is necessary to ensure compliance with standards and interoperability among products. We have previously reported that independent testing and evaluation of commercial products and accreditation of the laboratories that perform the tests and evaluations can give agencies increased assurance that the products will perform as vendors claim. However, since 1995, Project 25 radios have been marketed to and purchased by federal, state, and local agencies without any formal compliance testing to validate vendors’ claims of compliance with the Project 25 standards. As a result, recent testing has shown that products labeled “Project 25 compliant” do not necessarily interoperate.
State and local agencies do not know how to select Project 25 products: With no formal compliance testing for Project 25 products, state and local agencies have limited means to determine if the products they purchase are compliant with the standards. Therefore, in the absence of any other information, agencies have relied on information provided by vendors. Further, vendor products have many different levels of functionality, and agency officials may not understand their needs well enough to purchase equipment tailored to their requirements that does not include costly functionality they do not need. However, comparative information about product functionality and typical first responder requirements is not currently available in a centralized location, making it difficult for officials to judge which products are most appropriate for their agency’s needs. For example, according to one manufacturer, public works agencies and schools would likely need radios with less functionality, while firefighters would likely need a midrange radio with more features, and a command center or federal law enforcement agency might need the most expensive radios with the greatest number of features. Because of the complexity of product options, agencies may not always be making well-informed decisions on the purchase of radios.

Complete Project 25 systems can be prohibitively expensive: Project 25 radios are significantly more expensive than conventional analog radios, and while state and local agencies are paying two to three times more for Project 25 radios, they are not always able to take advantage of the intended interoperability benefits because they cannot afford to procure complete systems. Project 25 radios for first responders can range in price from about $1,000 to about $5,000. Most Project 25 radios used by first responders cost around $2,500.
According to officials, a conventional analog radio suitable for first responder work generally costs about one-third to one-half as much as a Project 25 radio. Benefits of using Project 25 radios, such as interoperability among multiple vendors’ equipment, cannot be fully realized until a complete Project 25 system (base stations, repeaters, and radios) is implemented. Fully replacing an existing radio system with a Project 25 system is very expensive. For example, Arlington County, Virginia—a relatively small county—is acquiring and implementing a full Project 25 environment for $16.8 million. Many localities do not have the funding to make such a large investment. Nevertheless, since 2003, DHS has strongly encouraged state and local agencies to use grant funding from the agency to purchase Project 25 compliant equipment. DHS grant guidance—which was developed by SAFECOM—states that all new voice system purchases should be compatible with the Project 25 suite of standards to ensure that equipment or systems are capable of interoperating with other public safety land mobile equipment or systems. If a grant applicant is interested in purchasing non-Project 25 compliant equipment, the applicant must demonstrate in its application that the system or equipment being proposed will lead to enhanced interoperability. While states and localities have purchased Project 25 radios at the direction of DHS, there is little indication that these radios have enhanced interoperability. Most jurisdictions we visited were not using the Project 25 capabilities, such as interoperating with different vendors’ radios, since they had not fully replaced their existing radio communications infrastructure with a complete Project 25 system. Specifically, of the 11 localities we visited, 8 were buying Project 25 radios and, of these, 7 were not using the Project 25 capabilities of the radios.
Thus, as a result of the DHS requirement to buy Project 25 equipment, agencies have purchased fewer, more expensive radios with little or no additional benefit to date. Table 5 shows a sample of spending by localities on Project 25 radios and their use of the Project 25 capabilities. To address the lack of well-defined standards, users and manufacturers have been revising the standards for the conventional and trunked mode operations of the common air interface to clarify ambiguities. To address the issue of a lack of formal compliance testing, SAFECOM, NIST, and the Project 25 steering committee began developing a peer compliance assessment program for Project 25 products in April 2005. This compliance assessment program is to use various vendors’ approved laboratories to test Project 25 systems through a set of agreed-upon tests that will validate that the systems from various vendors can successfully interoperate and meet conformance and performance requirements. According to NIST, the vendors will be expected to conduct the tests in compliance with a handbook on general testing procedures and requirements, which NIST is preparing to publish. The assessment program is to be implemented in three phases, as described in table 6. Additionally, SAFECOM has issued guidance to supplement the 2007 DHS grant guidance stating that, beginning in fiscal year 2007, grant recipients purchasing Project 25 equipment must obtain documented evidence from the manufacturer that the equipment has been tested and passed all available compliance assessment test procedures for performance, conformance, and interoperability. The guidance also specifies the aspects of Project 25 equipment that are available for testing and that should be tested before a public safety agency acquires the equipment. However, as of January 2007, only limited aspects of the common air interface had been defined fully enough to conduct interoperability tests.
Further, NIST’s testing procedures handbook was not yet complete, and thus vendors were unable to conduct testing. According to NIST officials, it has not been determined when the full set of conformance, performance, and interoperability tests for the common air interface will be available. NIST and SAFECOM are also working on ways to help agencies make informed decisions when purchasing Project 25 radios and acquire features that are Project 25 compliant. Specifically, NIST and SAFECOM have developed a decision tree to help guide officials in selecting the appropriate Project 25 capabilities. NIST has also helped to develop a new process for posting test results online so that potential buyers can have ready access to this information. While efforts are under way to address several of these issues, others remain. Specifically, DHS continues to strongly encourage state and local agencies to purchase Project 25 compliant equipment even though compliance testing is not yet available. Without the flexibility to address their needs with equipment that is the most effective and economical and that meets defined interoperability requirements aligned with a statewide plan, states and localities that purchase Project 25 equipment cannot be assured that their investments are likely to result in meaningful gains in interoperability. DHS grants, along with its technical assistance, have helped to make improvements on a variety of specific interoperability projects. However, in selected states, strategic planning has generally not been used to guide investments or provide assistance to improve communications interoperability across all levels of government.
Specifically, not all states had plans in place to guide their investments toward long-term interoperability gains; no national plan was in place to coordinate investments across states; and while UASI officials stated that the technical assistance offered to them had been helpful, DHS curtailed full-scale exercises, limiting their value in measuring progress. Until DHS takes a more strategic approach to improving interoperable communications, such as including in its decision making an assessment of how grant requests align with statewide communications plans, and conducts a thorough assessment to identify strategies to mitigate obstacles between federal agencies and state and local agencies, states and localities are likely to make limited progress in improving interoperability. Additionally, until DHS plans another round of full-scale exercises that provide UASI areas with sufficient planning time, the robustness and effectiveness of UASI plans will be limited.

The SAFECOM program has had a limited impact on improving communications interoperability among federal, state, and local agencies. The program’s limited effectiveness can be linked to poor program management practices, such as the lack of a plan for improving interoperability across all levels of government, and inadequate performance measures to fully gauge the effectiveness of its tools and assistance. The recent establishment of the OEC creates an opportunity for DHS to improve program management practices among formerly separate component organizations, including SAFECOM. Without a program plan for SAFECOM and other OEC interoperability programs that specifically addresses improvements to interoperable communications from federal to state and local agencies, and includes measures to assess the usefulness of its efforts, the effectiveness of the program is likely to remain limited.
While development of a comprehensive suite of standards such as Project 25 is critical to achieving interoperability among different manufacturers’ products, such a suite is not yet fully developed. Further, ambiguities in published standards have led to incompatibilities among products made by different vendors and, to date, no compliance testing has been conducted to ensure that vendors’ products interoperate. Nevertheless, DHS has strongly encouraged state and local agencies to use grant funding to purchase Project 25 compliant equipment. Until DHS modifies its grant guidance to give states and localities the flexibility to address their communications equipment needs effectively, economically, and in a way that meets interoperability requirements as defined in their statewide plans, states and local agencies are likely to continue to purchase expensive equipment that provides them with minimal additional benefits.

To better ensure that progress is made in improving interoperable communications among federal, state, and local first responders, we recommend that the Secretary of Homeland Security take the following five actions: assess how states’ grant requests support their statewide communications plans and include the assessment as a factor in making DHS grant allocation decisions; plan for new full-scale exercises for UASI areas that provide local officials with sufficient time to develop and implement exercises to validate the robustness and effectiveness of their tactical interoperable communications plans; develop and implement a program plan for SAFECOM and other OEC interoperability programs that includes goals focused on improving interoperability among all levels of government; include in the program plan for SAFECOM and other OEC interoperability programs quantifiable performance measures that can be used to determine the extent to which each of the goals has been accomplished and that can be used to assess the effectiveness and usefulness of
SAFECOM tools, assistance, and outreach, and make improvements based on the feedback; and modify grant guidance to provide more flexibility in purchasing communications equipment until standards for completed interfaces have been fully defined and products have been certified compliant. We received written comments from the Deputy Secretary of Commerce and the Director of the DHS liaison office for GAO and the Office of the Inspector General. Letters from these agencies are reprinted in appendixes III and IV. Commerce provided updated information and technical comments to help ensure that the information in the report is accurately presented. We have incorporated these comments as appropriate. In its response to our five recommendations, DHS agreed with two, stated that it would defer commenting on two, and disagreed with one recommendation. Regarding our recommendation that DHS develop and implement a program plan for SAFECOM and other Office of Emergency Communications (OEC) interoperability programs that includes goals focused on improving interoperability among all levels of government, the Director indicated that DHS agrees with the intent of the recommendation and stated that the department is currently working to develop a program plan. However, DHS raised concern about the perceived implication that no action had been taken. It stated that SAFECOM has always had goals for improving interoperability among local, state, tribal, and federal emergency response agencies. Our review showed that while the program has had broad goals that include federal, as well as state and local agencies, its specific program goals and activities have not focused on improving interoperable communications between federal and other agencies. For example, one of the program’s goals is to increase interoperable communications capacity of local, tribal, and state public safety agencies, not federal agencies.
Thus, it will be important for DHS to develop and implement a program plan that includes goals focusing on improving interoperability among all levels of government. DHS agreed with our recommendation to include quantifiable performance measures in the program plan for SAFECOM and other OEC interoperability programs. DHS indicated that it intends to establish such measures by the third quarter of 2007. DHS stated that it is deferring comments on two recommendations: (1) assess how states’ grant requests support their statewide communications plans and (2) plan for a new full-scale exercise for UASI areas to validate their interoperable communications plans. DHS disagreed with our recommendation that it modify grant guidance to provide more flexibility in purchasing communications equipment until standards for completed interfaces have been fully defined and products have been certified compliant with all aspects of the standards. The Director stated that the recommendation would require SAFECOM to amend its interoperability grant guidance until after the entire Project 25 suite of standards is complete and could undermine remaining negotiations between the public safety community and equipment manufacturers. We agree that development of a comprehensive suite of standards such as Project 25 is critical to achieving interoperability among different manufacturers’ products. We also agree that not all interfaces need to be fully defined before agencies can begin acquiring Project 25 products; thus we have clarified the recommendation to reflect this. However, we are not recommending that the public safety community be prohibited from acquiring Project 25 equipment, and thus we do not believe negotiations with equipment manufacturers would be undermined. 
Until critical interfaces are better defined and products have been certified compliant, DHS should allow state and local agencies the flexibility to purchase whatever products they can obtain that offer the best value and performance for their needs. DHS also stated that it estimates that the Project 25 standards will be complete within the next 18 to 24 months and that fiscal year 2007 grant funding will be spent by local public safety agencies not in fiscal year 2007 but in subsequent years. We have modified the discussion of this issue in the report to reflect this information. However, as previously stated, much additional work remains to be accomplished. Additionally, DHS stated that our report should include other major programs that focus on interoperability among federal responders, such as the newly created Office of Emergency Communications within DHS, the Integrated Wireless Network, the Interoperable Communications Technical Assistance Program, and the Federal Partnership for Interoperable Communications. However, our report does discuss the first three of these. The primary purpose of the Federal Partnership for Interoperable Communications is to serve as a coordinating body to address technical and operational activities within the federal wireless community; it has limited applicability to state and local interoperability. Finally, DHS raised concern with our view that SAFECOM had mistakenly made local, tribal, and state emergency responders its highest priority. DHS stated that when SAFECOM was established as one of the electronic government initiatives, it was placed within the government-to-government portfolio. According to DHS, state and local government agencies are the primary customers of this portfolio. However, according to OMB, the goal of the government-to-government portfolio is to forge new partnerships among all levels of government, not just state and local. 
Additionally, as we have previously stated, when SAFECOM was initially established, one of its major goals was to achieve federal to state/local interoperability. However, it is no longer a goal for SAFECOM. DHS also stated that since 90 percent of the public safety infrastructure is owned, operated, and maintained by local jurisdictions, state and local interoperability is a higher priority. However, our review has shown that in major incidents such as a terrorist attack, a major hurricane, or wildland fire, federal, state, and local first responders will need to interoperate in order to respond effectively to the incident. Therefore, interoperability with federal first responders should be included as a key element in the department’s strategy for improving interoperable communications throughout the nation. DHS also provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the Secretaries of Homeland Security and Commerce and other interested congressional committees and subcommittees. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-6240 or by e-mail at koontzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Our objectives were to determine (1) the extent to which the Department of Homeland Security (DHS) funding and technical assistance have helped to improve interoperable communications in selected states, (2) the progress the SAFECOM program has made in improving interoperable communications, and (3) the progress that has been made in the development and implementation of interoperable communications standards. 
To determine the extent to which DHS funding and technical assistance helped to improve interoperable communications in these states, we reviewed documentation and interviewed state and local officials from selected states. We selected four states as case studies, using the following criteria: All of the states must have received at least an average amount of funding from fiscal year 2003 through fiscal year 2005. One of the states must have received over $100 million of grant funding for interoperable communications from DHS. One of the states must have received assistance from SAFECOM in applying the Statewide Communications Interoperability Planning Methodology. One of the states must have had an Urban Area Security Initiative (UASI) area involved in DHS’s RapidCom program. One of the states must border another country. At least one of the states must be one of the top 10 states that regularly face wildland fires. At least one of the states must be one of the top states that regularly face other large natural disasters, such as hurricanes or earthquakes. We selected localities from each state to visit, which included (1) the UASI region that received the most funding from DHS, (2) the non-UASI county that received the largest amount of DHS funding, and (3) the county and city where the state capital is located. From each of these states and localities, we obtained and reviewed documentation such as grant funding amounts, Tactical Interoperability Communication Plans, exercise reports, and communication system documentation. We also met with interoperability committee members and first responders. Additionally, we obtained and analyzed documentation from, and met with, DHS officials who are responsible for monitoring the use of DHS funds in each of these states. 
To determine the progress SAFECOM has made in improving interoperable communications, we reviewed SAFECOM documentation such as its Statewide Communication Interoperability Planning Methodology, Public Safety Architecture Framework, and Statement of Requirements. We also analyzed program management documentation (such as program goals, initiatives, and performance measures), interviewed SAFECOM officials to discuss the progress of the program, and interviewed state and local officials to determine their use of SAFECOM tools and guidance. To obtain Federal Bureau of Investigation (FBI) information, we relied on interviews conducted by another GAO team. To determine progress in developing and implementing interoperable communications standards, we obtained and reviewed documentation from National Institute of Standards and Technology (NIST) officials on standards development such as status updates and recent testimonies. Additionally, we reviewed documentation from states and localities to determine the extent to which they are implementing, and spending on, Project 25 products. We also met with officials from NIST and representatives from communications equipment manufacturers. Because our objectives were focused on DHS efforts to improve interoperable communications, we neither assessed programs in other agencies, such as the Federal Communications Commission or the National Telecommunications and Information Administration, nor reviewed issues related to spectrum allocation. We performed our work in the Washington, D.C., metropolitan area; Tallahassee, Fort Myer, and Miami, Florida; Louisville, Frankfort, and Mount Sterling, Kentucky; Albany, Syracuse, and Brooklyn, New York; and Beaverton, Salem, and Medford, Oregon, from April 2006 to February 2007, in accordance with generally accepted government auditing standards. 
There is wide variation in the frequencies (i.e., very high frequency (VHF) and ultra high frequency (UHF)) and radio technologies (i.e., digital, analog, conventional, and trunked) that are used among federal, state, and local agencies within each of the four states we reviewed. A summary of communications systems and interoperability initiatives in each of these four states follows. There are over 150 radio systems in use within the state of Florida. To improve interoperability among these systems, Florida officials have developed several centralized solutions that are used throughout the state at all levels of government. Localities maintain their existing communications systems, relying on Florida’s statewide systems only when they need to interoperate with another agency or jurisdiction. According to DHS, Florida has received approximately $55.7 million in DHS funding from fiscal year 2003 through fiscal year 2005 to improve interoperable communications. Florida’s centralized approach entails making funding decisions through a body (the Domestic Security Oversight Council) supported by a hierarchy of communications-related committees that includes local representation from each of the seven regions in the state. According to state officials, for the statewide interoperability solutions, Florida does not allocate DHS funding to local agencies; it takes on the responsibility of centrally purchasing equipment to ensure that all agencies and jurisdictions have equipment that is compatible. UASI grants are awarded directly to the UASI areas; therefore, Florida does not centrally manage those funds. 
To improve the interoperability among the 150 disparate communications systems throughout the state, Florida officials have developed several statewide solutions: In 2003, the Domestic Security Oversight Council and its supporting communication committees determined that it would not be economically feasible to replace all existing systems in the state with one new system. The council therefore decided to develop a “backbone” system that could connect with each of the existing systems. This system, referred to as the Florida Interoperability Network, establishes network connections between federal, state, and local dispatch centers across the state (see fig. 5). It enables dispatchers to connect first responders on disparate radio systems and frequencies to allow them to directly communicate with one another. Existing independent systems are maintained. According to state officials, as of January 2007, first responders in 64 of Florida’s 67 counties are able to have their communications patched to each other as needed via the network. As part of the Florida Interoperability Network, Florida officials are also working to establish additional mutual aid channels that are intended to provide radio service to first responders outside the range of their agency’s local system or when they need to communicate with users not on their local systems. These channels are intended to expand geographic coverage to ensure that, wherever they go, Florida’s first responders have radio communication capability. To this end, officials are adding 428 tower sites to the existing 93 sites across the state. Florida also acquired and implemented a radio communications system to serve law enforcement units of state agencies and to serve local public safety agencies through a mutual aid channel. 
The Statewide Law Enforcement Radio System provides state law enforcement officers with a shared digital, trunked radio system that serves over 6,500 users with 14,000 radios in patrol cars, boats, motorcycles, and aircraft. Florida’s first federally funded project was the Emergency Deployable Interoperable Communications Systems. These are mobile systems that can be deployed to a specific response area to patch together disparate communications systems. According to state officials, these systems are generally used in one of the following situations: (1) to tie different radio systems together in an area that is not connected to the Florida Interoperability Network, (2) to connect different radio systems together if the network becomes inoperable, or (3) to tie disparate radios together when assisting in an out-of-state incident, such as Hurricane Katrina. Nine of these systems were purchased and deployed throughout the state. Florida has seven Mutual Aid Radio Communications units in the state, and officials are building an additional unit. The units are stand-alone mobile interoperable communications networks. Unlike Emergency Deployable Interoperable Communications Systems, Mutual Aid Radio Communications units include a cache of radios that can be distributed to first responders, a tower, and a mobile repeater system, so no patching needs to be done. These units are used when the local communications systems become inoperable, such as when a hurricane destroys the local communications infrastructure. The units provide temporary infrastructure for a response area to maintain communication during an incident. Florida localities vary in their approaches and the level of interoperability within their regions. They utilize the statewide solutions to supplement their existing systems. For example, the 35 to 40 different radio systems throughout the Miami UASI area have limited direct interoperability. 
The Miami region relies on patching mechanisms, including the Florida Interoperability Network, to provide interoperable communications when needed. In contrast, according to officials, government agencies within Lee County, with the exception of the school board, utilize the same communications systems and, therefore, are all directly interoperable. The level of interoperability with surrounding counties varies. When they need to communicate with neighboring jurisdictions or state first responders, they utilize the interoperability network. While Kentucky first responders coordinate interoperability primarily by sharing frequencies and establishing patches, the state is creating mutual aid channels to better enable responders on different frequencies to communicate through patches. According to DHS, from fiscal year 2003 through fiscal year 2005, Kentucky received $50 million from DHS for interoperable communications. Kentucky’s governance structure for interoperable communications is organized centrally at the state level through the Kentucky Wireless Interoperability Executive Committee. To ensure that the committee has an awareness of initiatives across the state, all state agencies and local government entities must present project plans for primary wireless public safety voice or data communications systems for review and recommendation by the committee, even if no state or federal funding is used for the system. While the committee only has the authority to decline or approve projects funded with state or federal dollars, a large majority of local projects are financed through state or federal funding. Kentucky’s strategy to improve interoperable communications in the near term is to utilize statewide mutual aid channels that allow agencies on different communication systems to tune into a dedicated frequency shared among one or more public safety agencies. Kentucky also plans to implement communications bridges to patch different systems together. 
The mutual aid approach requires the deployment of three channels, one for each frequency band that Kentucky public safety agencies currently use. Currently, approximately 34 percent of applicable agencies have signed a memorandum of understanding to commit to using the mutual aid channels. Other agencies that have not yet signed a memorandum are also utilizing the channels. Kentucky officials are also in the process of implementing a console-to-console bridge solution that will allow dispatchers to patch users on one frequency to users on another frequency (see fig. 6). For example, a first responder using a lower frequency who needs to talk to a first responder using a higher frequency would contact the Kentucky State Police dispatch center to request a patch. The dispatcher would then use a patching mechanism to patch the two channels so that the responders could talk directly to each other. The solution is operational in two of the three frequency bands and is nearing completion in the third. To supplement voice communications interoperability, Kentucky has implemented a wireless data communications interoperability solution as well. This solution provides functionality such as records management, real-time crime coverage, real-time data collection, and instant messaging. The system consists of approximately 165 base stations throughout the state to supply continuous wireless coverage in most regions. First responders use mobile data terminals to communicate with each other and, in many cases, retrieve information from their agency’s database. Kentucky’s mobile data network currently has coverage across approximately 95 percent of the state’s primary and secondary road systems. In the long term, state officials intend to build a statewide public safety communications and interoperability infrastructure. They are in the process of completing a statewide baseline communications study as an initial step in the planning phase. 
No further specific initiatives and milestones have yet been identified for this project. Interoperability is typically coordinated at the city and county levels. In the jurisdictions we visited, interoperability solutions included planning in advance to program other frequencies into radios, establishing patches between disparate communication systems through a dispatch center, and swapping radios. In Louisville, Kentucky, both UHF and VHF systems are in use and, when necessary, connected through patching mechanisms. Many responders carry both a UHF and VHF radio in their vehicles. For major incidents, a mobile vehicle with a repeater system can be deployed to connect first responders. In addition, since 2000, Louisville has been utilizing a wireless data communications interoperability solution that includes 550 first responders in the Louisville metropolitan area. All local agencies within Franklin County use VHF systems; first responders program each other’s channels into their radios. Frankfort and Franklin County use mutual aid channels when needed. First responders have difficulty connecting to the Kentucky State Police, as that agency recently switched to a digital, trunked communications system. Currently, to connect to the state police, Frankfort and Franklin police contact a dispatch center and request a patch to Kentucky State Police. Montgomery County agencies use both UHF and VHF systems. First responders within the county and in neighboring counties typically program each other’s channels into their radios. Communication with state agencies varies; for example, fire and EMS agencies in Montgomery County cannot communicate with their state counterparts at present, whereas local police can communicate with the state police through mutual aid channels or in instances in which they have interoperable radios. 
New York is currently in the process of implementing a statewide system that will connect all state agencies and offer connection services to local agencies. This initiative is being funded by the state. Localities continue to develop and maintain their own communication systems and interoperability solutions. According to DHS, from fiscal year 2003 through fiscal year 2005, New York State has received $145.5 million in grant funding for interoperable communications. New York has established a Statewide Interoperability Executive Committee that is currently working to establish a state interoperability plan. In addition, there are several different groups throughout New York that are involved with interoperability at the state and local level. According to state officials, the governance structure limits the state’s ability to mandate requirements to local governments; therefore, individual counties and cities determine their own interoperability requirements and have their own governance structure in place for interoperable communications. The state, however, determines priority investments, and the localities must spend grant money on these priority investments. Interoperable communications was a priority investment for both grants for fiscal year 2006. The state is currently in the process of deploying a Statewide Wireless Network intended to provide an integrated mobile radio communications network that links all state agencies and would be available to connect participating local agencies (see fig. 7). It will be a digital, trunked radio system with both voice and data capabilities and will be used in day-to-day operations, as well as large-scale emergency situations. The network is to interconnect radio sites across the state through a “backbone network” based on Internet Protocol (IP). The network is to operate on the 700 and 800 MHz frequencies, as well as VHF frequencies in geographically challenging terrain, such as the Adirondack and Catskill Mountains. 
Users operating on other frequencies and with less advanced technology can be connected to the network through a gateway. State agencies are required to be a part of the Statewide Wireless Network, but local agencies may join on a voluntary basis. As previously mentioned, according to state officials, they are limited in their ability to require local agencies to utilize the network. Local agencies will have the following three interoperability options: Full system partnership: the state will provide the base infrastructure, such as radio towers, and the agency will purchase IP-addressable, digital, trunked radios, as well as any additional repeaters, to operate on the network. Interface/gateway partnership: allows local agencies to maintain their own separate networks and provides a connecting gateway between a local agency’s dispatch console and the network. Shared communication system infrastructure: the state and localities use the same towers for their separate systems, but there is no mechanism for patching communications between the state and local systems. New York is implementing the Statewide Wireless Network in several phases and expects full implementation to be completed in September 2010. Even though joining this state network is free, localities need to buy digital, trunked, IP-addressable radios to participate directly, as well as additional infrastructure such as repeaters to get complete coverage in urban areas and buildings. Throughout the state of New York, many different communications systems exist. Each area has developed its own methods aimed at improving interoperability. Additionally, localities generally do not include the Statewide Wireless Network as part of their local approach to improving interoperable communications. As of December 2006, one agency in New York City and only 7 of the 62 counties in New York have partnered with the network to be full system users. Twenty-five counties have agreed to connect through a gateway. 
In the New York City UASI area, the police department maintains six channels for citywide interoperability. Any agency can use these channels by signing a memorandum of understanding and ensuring that it meets the necessary technical requirements. Additional interoperability strategies used by the New York City UASI include using a federal interoperability channel and deploying mobile patching devices to connect disparate systems at an incident site. In addition, New York City is working to develop the City-wide Mobile Wireless Network, which is intended to provide police and fire first responders with high-speed data access to support large file transfers, including federal and state anticrime and antiterrorism databases, fingerprints, and maps. Further, the city has implemented a regional wide-area interoperability system that is New York City’s primary interoperability network for first responders in the city. It is currently being expanded to include first responders in Nassau, Suffolk, and Westchester Counties, and parts of New Jersey. Agencies in Albany County typically interoperate by programming the frequencies of other agencies into their radios, including agencies in neighboring counties. The county also has a patching mechanism that can connect different radio networks during an emergency. To improve its interoperability and connect the county to neighboring counties, Albany County is currently in the process of developing a countywide system. This system will use gateways to connect existing systems that operate on different frequency bands and allow all public safety responders within the county to communicate with any other responder in Albany County regardless of the radio system or technology used. Albany is also currently developing a fiber optic system that will connect all 12 Public Safety Access Points in the county. Onondaga County relies on dispatchers to connect first responders. 
All dispatching for Onondaga County is centralized at the county’s 911 call center. To improve its interoperability, Onondaga County is currently working to implement a countywide digital system that will connect all county agencies. Oregon is currently in the process of planning a statewide system to connect all state agencies and provide a means for local agencies to be patched to users on the statewide system. Localities continue to develop and maintain their own communication systems and interoperability solutions. According to DHS, Oregon has received $53.4 million from fiscal year 2003 through fiscal year 2005 in grant funding to improve interoperable communications. Oregon has a State Interoperability Executive Council to centrally manage Oregon’s interoperable communications. The council, which is composed of state and local representatives, requires that each county prepare a communications plan. Additionally, the council is in the process of developing a statewide interoperable communications plan that incorporates all the county plans. Most state agencies are currently using VHF and UHF analog, conventional radio systems, which are in some cases 30 years old and in need of major repairs and upgrades. Oregon state agencies experience significant coverage gaps in their existing communications systems due to a lack of transmission towers. Additionally, these state systems are not always interoperable with federal or local systems. In the absence of shared radio systems among federal, state, and local first responder agencies, Oregon’s state agencies use various alternative approaches to establish interoperable communications with agencies they work with on a regular basis, such as using a dispatcher or patching devices to establish connections between disparate radio systems, and lending radios to first responders from other agencies. 
Due to the deteriorating status of Oregon’s state agencies’ communications systems, State Interoperability Executive Council officials have been working with contractors to develop a concept for a new state system. The Oregon Wireless Interoperability Network is to be a Project 25, trunked, digital radio network that will rely on an IP interface to interoperate with state agencies’ subsystems. Plans for the interoperability network are to allow the majority of state agencies to operate on a unified trunked system while supporting conventional operations where and when required. These officials plan to issue a contract to a vendor by October 2007 and implement the first phase of the network by October 2009. The Oregon Wireless Interoperability Network is intended to be the primary system for state agencies; local agencies will be expected to maintain their existing systems as their primary systems and use the network as their secondary system. A patching mechanism would be established to allow local agencies to be connected to state agencies, as well as to other local agencies with which they do not already have interoperability. Figure 8 is a depiction of the interoperability network concept as currently envisioned. Local agencies use a wide range of radio frequencies and communication technologies and have various strategies and solutions for improving interoperability. In particular, Marion County uses analog UHF and VHF systems with both trunked and conventional radios. Officials stated that they have limited interoperability with state and federal agencies and that they, therefore, maintain a cache of 30 radios available to share when needed. Additionally, they can use a mobile command unit that can be deployed to any area and contains another cache of radios. In the Portland UASI, four of the five counties use 800 MHz, analog, trunked radio systems that provide direct interoperability among those four counties. 
The fifth county is on a separate VHF system. Portland UASI officials have installed equipment to improve interoperability with this fifth county. Additionally, to provide interoperability with the fifth county and other agencies outside the UASI area, the officials use mechanisms such as a mobile trailer to patch calls and loan radios from their cache of radios. Jackson County agencies generally use conventional, VHF, analog radio systems. Officials indicated that although two of the cities within the county are digitally capable, their first responders use the analog mode because many of their neighboring jurisdictions do not have digital radios. In order to interoperate with jurisdictions on different systems, they use common radio channels and patching mechanisms, as well as a mobile command vehicle that is equipped with a cache of radios on different frequencies and a patching device. In addition, Jackson County and Josephine County are developing a communications system that connects the two counties. In addition to the individual named above, John de Ferrari, Assistant Director; Neil Doherty; Richard Hung; Tom Mills; Shannin O’Neill; Karen Talley; Amos Tevelow; and Jayne Wilson made major contributions to this report.

As the first to respond to natural disasters, domestic terrorism, and other emergencies, public safety agencies rely on timely communications across multiple disciplines and jurisdictions. It is vital to the safety and effectiveness of first responders that their electronic communications systems enable them to communicate with whomever they need to, when they need to, and when they are authorized to do so. GAO was asked to determine, among other things, (1) the extent to which Department of Homeland Security (DHS) funding and technical assistance has helped to improve interoperable communications in selected states and (2) the progress that has been made in the development and implementation of interoperable communications standards. 
To address these objectives, GAO reviewed grant information, documentation of selected states' and localities' interoperability projects, and standards documents. According to DHS, $2.15 billion in grant funding was awarded to states and localities from 2003 through 2005 for communications interoperability enhancements. This funding, along with technical assistance, has helped to make improvements on a variety of specific interoperability projects. However, states that GAO reviewed had generally not used strategic plans to guide investments toward broadly improving interoperability. Further, no national plan was in place to coordinate investments across states. To its credit, DHS has required states to implement a statewide plan by the end of 2007, and DHS has recently been required to implement a National Emergency Communications Plan. However, no process has been established for ensuring that states' grant requests are consistent with their statewide plans. Until DHS takes a more strategic approach to improving interoperable communications--such as including in its decision making an assessment of how grant requests align with statewide communications plans--progress by states and localities in improving interoperability is likely to be impeded. Until recently, the private-sector coordinating body responsible for developing Project 25 standards--a suite of national standards intended to enable interoperability among the communications products of different vendors--has made little progress. Although one of the eight major subsets of standards was defined in the project's first 4 years (from 1989 to 1993), from 1993 through 2005, no additional standards were completed that could be used to develop Project 25 products. Specifications for three additional subsets of standards were defined over the past 2 years. 
However, ambiguities in the published standards have led to incompatibilities among products made by different vendors, and no compliance testing has been conducted to determine if these products are interoperable. Nevertheless, DHS has strongly encouraged state and local agencies to use grant funding to purchase Project 25 radios, which are substantially more expensive than non-Project 25 radios. As a result, states and local agencies have purchased fewer, more expensive radios that still may not be interoperable and thus may provide few added benefits. Until DHS modifies its grant guidance to provide more flexibility in purchasing communications equipment, states and localities are likely to continue to purchase expensive equipment that provides them with minimal additional benefits.
The Park Service is the caretaker of many of the nation’s most precious natural and cultural resources. Today, more than 130 years after the first national park was created, the National Park System has grown to include 390 units covering over 84 million acres in 49 states, the District of Columbia, American Samoa, Guam, Puerto Rico, Saipan, and the Virgin Islands. The Park Service manages its responsibilities through headquarters, seven regional offices, and its individual park units. These units include a diverse mix of sites—now in more than 20 different categories. These include (1) national parks, such as Yellowstone in Idaho, Montana, and Wyoming; Yosemite in California; and Grand Canyon in Arizona; (2) national historical parks, such as Harper’s Ferry in Maryland, Virginia, and West Virginia; and Valley Forge in Pennsylvania; (3) national battlefields, such as Antietam in Maryland; (4) national historic sites such as Ford’s Theatre in Washington, D.C.; and Carl Sandburg’s home in North Carolina; (5) national monuments, such as Fort Sumter in South Carolina; and the Statue of Liberty in New York and New Jersey; (6) national preserves, such as Yukon-Charley Rivers in Alaska; and (7) national recreation areas, such as Lake Mead in Arizona and Nevada. Some of these park units, such as Yellowstone, cover millions of acres and employ hundreds of employees. Other units, such as Ford’s Theatre which encompasses two historic structures, are small and have few employees. The Park Service’s mission is to preserve unimpaired the natural and cultural resources of the National Park System for the enjoyment of this and future generations. Its objectives include providing for the use of the park units by supplying appropriate visitor services and infrastructure (e.g., roads and facilities) to support these services. 
In addition, the Park Service protects its natural and cultural resources (e.g., preserving wildlife habitat and Native American sites) so that they will be unimpaired for the enjoyment of future generations. Due to the complexity of its mission, its large land area, and the number and diversity of its park units, the Park Service faces many challenges—including a deteriorating infrastructure (due in part to an estimated $5 billion maintenance backlog), threats to preserving natural and cultural resources, and challenges to maintaining visitor services. Moreover, despite fiscal constraints facing all federal agencies, the number of park units continues to expand—12 units, mostly small ones, have been authorized since fiscal year 2001. The Park Service receives its main source of funds to operate park units through appropriations from the ONPS account. The Park Service chooses to allocate funds to its park units in two categories—allocations for daily operations and allocations for specific, non-recurring projects. Daily operations allocation levels for individual park units are built on each unit's allocation for the prior year. Park units receive an increased allocation for required pay increases and request specific increases for new or higher levels of ongoing operating responsibilities, such as adding law enforcement rangers for increased homeland security protection. Park Service headquarters takes the initiative in requesting funding for all required employee pay increases on a service-wide basis. However, for park-specific increases, once funding is appropriated, park units compete against one another through their regional office and headquarters for the available funds. As is true for other government operations, the cost of operating park units will increase each year due to required pay increases, the rising costs of benefits for federal employees, and rising overhead expenses such as utilities.
The Park Service may provide additional allocations for daily operations to cover all or part of these cost increases. If continuing operations at the previous year's level would require more funds than are available, park units must adjust by identifying efficiencies within the unit, using other authorized funding sources such as fees or donations to fund the activity, or reducing services. Upon receiving their allocations for daily operations each year, park unit managers exercise a great deal of discretion in setting operational priorities. Typically, these decisions involve trade-offs among four categories of spending: (1) visitor services (e.g., opening a campground or adding law enforcement staff), (2) resource management (e.g., monitoring the condition of threatened species or water quality), (3) maintenance needs (e.g., repairing a trail), and (4) park administration and support (e.g., updating computer systems or attending training). Generally, about 80 percent of each park unit's allocation for daily operations is used to pay the salaries and benefits of permanent employees (personnel costs). Park units use the remainder of their allocations for daily operations for overhead expenses such as utilities, supplies, and training, among other things. In addition to daily operations funding, the Park Service also allocates project-related funding to park units for specific purposes to support its mission. For example, activities completed with Cyclic Maintenance and Repair and Rehabilitation funds include re-roofing or re-painting buildings, overhauling engines, refinishing hardwood floors, replacing sewer lines, repairing building foundations, and rehabilitating campgrounds and trails. Park units compete for project allocations by submitting requests to their respective regional office and headquarters. Regional and headquarters officials determine which projects to fund.
While an individual park unit may receive funding for several projects in one year, it may receive none the next. Park units may also receive revenue from outside sources such as visitor fees and donations—although there are often limitations on how these revenues may be used. Since 1996, the Congress has provided the park units with authority to collect fees from visitors and retain these funds for use on projects to enhance recreation and visitor enjoyment, among other things. Since 2002, the Park Service has required park units to spend the majority of their visitor fees on deferred maintenance projects, such as road or building repair. The Park Service also receives revenue from concessionaires under contract to perform services at park units—such as operating a lodge—and cash or non-monetary donations from non-profit organizations or individuals, among others. For example, as we reported in July 2003, about 200 cooperating associations and “friends groups” helped support 347 park units, contributing over $200 million from 1997 to 2001. These funds may vary from year to year and, in the case of donations, may be accompanied by stipulations on how the funds may be used. Figure 1 illustrates the principal funding sources used by park units to perform operations. Total appropriations for the ONPS account increased overall in both nominal and inflation-adjusted dollars from fiscal year 2001 through 2005. However, the agency allocated funds such that, in inflation-adjusted terms, the total allocation for daily operations from these appropriations fell slightly overall, while the total allocation for projects increased overall. About 56 percent of the individual park units and about 74 percent of the more highly visited parks experienced an overall decline in their allocation for daily operations when adjusted for inflation during this period. The agency allocated funding for projects at a higher rate than for daily operations. 
As shown in figure 2, overall appropriations for the ONPS account— including the amounts the Park Service allocated for daily operations and projects—rose in both nominal and inflation-adjusted dollars overall from fiscal years 2001 through 2005. Nominal dollars increased from about $1.4 billion in fiscal year 2001 to almost $1.7 billion in fiscal year 2005, an average annual increase of about 4.9 percent (i.e., about $68 million per year). After adjusting these amounts for inflation, the average annual increase was about 1.3 percent or almost $18 million per year. By contrast, the Park Service’s overall budget authority increased to about $2.7 billion in 2005 from about $2.6 billion in 2001, an average increase of about 1 percent per year. In inflation adjusted dollars, the total budget authority fell by an average of about 2.5 percent per year. With the increases in appropriations for the ONPS account, the Park Service increased allocations for projects and other support programs such as the Repair and Rehabilitation, Cyclic Maintenance, and Inventory and Monitoring programs, among others. The overall allocation for daily operations, on the other hand, declined slightly on average when adjusted for inflation. The Park Service’s total allocation for daily operations for park units increased overall in nominal dollars but the total allocation fell slightly when adjusted for inflation from fiscal years 2001 through 2005. As illustrated in figure 3, overall allocations for daily operations for park units rose from about $903 million in fiscal year 2001 to almost $1.03 billion in fiscal year 2005—an average annual increase of about $30 million, or about 3 percent. After adjusting for inflation, the allocation for daily operations fell slightly from about $903 million in 2001 to about $893 million in 2005— an average annual decline of about $2.5 million, or 0.3 percent. 
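The nominal and inflation-adjusted growth figures above follow a standard calculation: a compound average annual rate over the four-year span, with real amounts obtained by deflating end-year dollars to fiscal year 2001 prices. A minimal sketch of that arithmetic, using rounded ONPS figures from the text and a hypothetical price index (the report does not name the deflator it used, so the real-growth result will only roughly match the report's figure):

```python
def cagr(start, end, years):
    """Compound average annual growth rate between two amounts."""
    return (end / start) ** (1.0 / years) - 1.0

def to_real(nominal, deflator, base_deflator):
    """Deflate a nominal amount to base-year dollars."""
    return nominal * base_deflator / deflator

# Rounded ONPS appropriations from the text ($ millions); the price
# index values are hypothetical -- the report's deflator is not stated.
nominal_2001, nominal_2005 = 1_400, 1_700
index = {2001: 100.0, 2005: 113.0}

real_2005 = to_real(nominal_2005, index[2005], index[2001])
print(f"nominal growth: {cagr(nominal_2001, nominal_2005, 4):.1%} per year")
print(f"real growth:    {cagr(nominal_2001, real_2005, 4):.1%} per year")
```

The same two-step computation (nominal rate, then the rate on the deflated amount) underlies each of the report's paired nominal/real figures.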
The fiscal year 2005 appropriation for the ONPS account included an additional $37.5 million over the amounts proposed by the House and Senate for the ONPS account, to be used for daily operations. The conference report accompanying the appropriation stated that the additional amount was to be used for (1) a service-wide increase of $25 million and (2) $12.5 million for visitor services programs at specific park units. Of the 380 park units that received funding for daily operations for the entire period of our review, 212 (or about 56 percent) saw an average annual decline in inflation-adjusted terms of about 2 percent. The declines ranged from less than 0.1 percent at the Mary McLeod Bethune Council House National Historic Site to about 5.2 percent at Petroglyph National Monument. The remaining 168 park units' daily operations funding trends were either flat or increasing from 2001 through 2005, with the largest increase being about 39 percent at Rosie the Riveter/WWII Home Front National Historical Park. Figure 4 shows the number of park units and their respective average annual percentage changes in daily operations allocations from 2001 through 2005. The park units for which figure 4 shows declines in inflation-adjusted dollars allocated for daily operations include most of the park units with large allocations for daily operations. These 212 park units represented about 69 percent of the total allocation for daily operations for all park units in fiscal year 2001 and about 64 percent in fiscal year 2005. Conversely, the 168 park units for which figure 4 shows increases in inflation-adjusted terms in allocations for daily operations represented about 31 percent of the total allocations for daily operations for all units in fiscal year 2001 and about 36 percent in fiscal year 2005.
About 74 percent of the 83 most highly visited park units—those with over one million recreation visits per year—showed an average annual decline in inflation-adjusted terms in daily operations allocations from fiscal years 2001 through 2005. For example, allocations for daily operations at Lake Mead National Recreation Area (which includes Parashant National Monument), Hawaii Volcanoes National Park, and Olympic National Park fell in real terms by about 4 percent, 3 percent, and 2 percent, respectively. In contrast, about 47 percent of the park units with less than 200,000 recreation visits per year saw declines in real terms in their allocations for daily operations. Table 1 shows the number and percentage of park units receiving average annual percentage increases and declines in inflation-adjusted allocations for daily operations by categories of average annual recreation visits. Allocations for projects and other support programs increased overall in both nominal and inflation-adjusted dollars. As figure 5 illustrates, these allocations rose from about $478 million in 2001 to about $641 million in 2005—an average annual increase of about 7.7 percent, or about $36.5 million. When adjusted for inflation, the increase was 3.9 percent, or about $18.7 million per year. Figure 5 shows allocation trends of projects and other support programs for the Park Service from fiscal years 2001 through 2005. Three programs that include project funding for individual park units—Cyclic Maintenance, Repair and Rehabilitation, and Inventory and Monitoring—account for over half of the increase in the project and support program allocations. As a percentage of total project and support program funding, funding for these programs rose to 31 percent in 2005 from 23 percent in 2001.
For example, Cyclic Maintenance program funding increased from $34.5 million in 2001 to $62.8 million in 2005—an average annual increase of 16.2 percent in nominal terms, or 12.1 percent when adjusted for inflation. Increases in the Cyclic Maintenance and Repair and Rehabilitation programs reflect the Park Service's emphasis on reducing its estimated $5 billion maintenance backlog. Increases in the Inventory and Monitoring Program reflect an emphasis on protecting natural resources, primarily through an initiative called the Natural Resource Challenge. Table 2 shows funding for these three programs from fiscal years 2001 through 2005. Allocations for other support programs had smaller increases or declined. For example, allocations for central offices—seven regional offices and the headquarters office—increased by less than 1 percent on an average annual basis when adjusted for inflation. Between fiscal years 2001 and 2005, the share of the ONPS account allocated to daily operations fell slightly, indicating a slight shift in emphasis toward project-related programs for park units. In fiscal year 2001, about 65 percent of the Park Service's appropriations from the ONPS account were allocated for daily operations. By 2004, the allocation for daily operations had fallen to about 60 percent, increasing slightly to about 62 percent for fiscal year 2005. Figure 6 shows the trend in the ratio of daily operations allocations to overall funding for operations for fiscal years 2001 through 2005. As shown in figure 7, total visitor fees collected by the Park Service increased from about $140 million in 2001 to about $147 million in 2005 (an average annual increase of about 1 percent); however, in inflation-adjusted dollars, the fees fell to about $127 million in 2005 (an average annual decline of over 2 percent).
Overall, the Park Service collected about $717 million in visitor fees in addition to its annual appropriations for operations from 2001 through 2005—an average of about $143 million per year. When adjusted for inflation, these visitor fees totaled about $670 million—an average of about $134 million per year. Visitor fee revenue depends on several factors, including the number of visitors to each park unit, the number of national passes purchased, and the amount each park charges for entry and services. All 12 park units we visited received allocations for projects from fiscal years 2001 through 2005 that varied among years and among park units. Allocations for daily operations for the 12 park units we visited also varied. On an average annual basis, each unit experienced an increase in daily operations allocations, but most experienced a decline in inflation-adjusted terms. Officials at each park believed that their daily operations allocations were not sufficient to address increases in operating costs and new Park Service management requirements. To manage within available funding, park unit managers also reported that, to varying degrees, they made trade-offs among operational activities—which in some cases resulted in reduced services in areas such as education, visitor and resource protection, and maintenance. Park officials also reported that they increasingly relied on volunteers and other authorized funding sources to provide operations and services that were previously paid for with allocations for daily operations from the ONPS account. Each of the 12 park units received allocations for projects from 2001 through 2005. Park units use project-related allocations for such things as rehabilitating structures, roads, and trails and inventorying and monitoring natural resources. The allocations for projects at the 12 park units totaled $76.8 million from 2001 through 2005.
Allocations varied from park to park and year to year because these allocations support non-recurring projects for which park units are required to compete and obtain approval from Park Service headquarters or regional offices. For example, at Grand Canyon National Park, allocations for projects between 2001 and 2005 totaled $6.7 million; however, during that time the annual amount fluctuated, from $824,000 in 2001 to $1.9 million in 2004 and $914,000 in 2005. Table 3 shows project-related allocations and their fluctuations from fiscal years 2001 through 2005 for the 12 park units we visited. The following examples illustrate projects that have been completed using these funds: Grand Canyon National Park received a total of $6.7 million in project allocations. Projects included $494,000 to repair and rehabilitate the North Bass trail; $175,000 to rehabilitate the Mather Amphitheater, which hosts evening ranger programs; and $31,000 to survey the declining northern leopard frog population. Grand Teton National Park received a total of $4.4 million in project allocations. Projects included $40,600 to perform cyclic maintenance on three historic log cabins; $280,000 for bison demographic disease surveillance; and $313,800 to rehabilitate a water sewer line. Acadia National Park received a total of $3.6 million in project allocations. In 2002, the park obtained $17,800 through the Natural Resource Preservation Program to work with the U.S. Fish and Wildlife Service and others to determine baseline information about the ecology and to assess the population status of wintering purple sandpipers. Gettysburg National Military Park received a total of $11.6 million in project allocations. Projects included $444,000 to replace failing septic systems in the park; $129,000 to replace water lines in historic structures; $385,000 to repair observation towers; and $92,000 to repair historic fences on Little Round Top—a highly visited Civil War battle site.
Yellowstone National Park received a total of $3.2 million in project allocations. Projects included $170,000 to repair thermal area walkways and $290,000 to rehabilitate roads in the Madison area of the park. As with allocations for projects from fiscal years 2001 through 2005, allocations for daily operations for the 12 park units we visited also varied. All 12 park units experienced an average annual increase in allocations for daily operations; however, when adjusted for inflation, 8 of the 12 parks we visited experienced a decline, ranging from less than 1 percent to approximately 3 percent. For example, Yosemite National Park's daily operations allocations increased from $22,583,000 in 2001 to $22,714,000 in 2005, an average of less than 1 percent per year. However, when adjusted for inflation, the park's allocation for daily operations fell by about 3 percent per year. Daily operations allocations at the remaining four parks increased after adjusting for inflation, ranging from less than 1 percent to about 7 percent. For example, Acadia National Park's daily operations allocations increased from $4,279,000 in fiscal year 2001 to $6,498,000 in fiscal year 2005, an average annual increase of about 11 percent in nominal terms and about 7 percent when adjusted for inflation. Park officials explained that although the daily operations allocation substantially increased over this period, most of the increase was for new or additional operations. To illustrate, in 2002, Acadia acquired the former Schoodic Naval Base. The increases in allocations for daily operations were to accommodate this added responsibility rather than to maintain operations that existed prior to the acquisition. In addition, park officials at Mount Rushmore National Memorial reported that most of their increases for daily operations were to add law enforcement staff to address new homeland security measures following the September 11, 2001, attacks.
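The per-park comparisons above (a nominal increase at Yosemite that becomes a real-terms decline, and increases in both terms at Acadia) can be reproduced with the same deflation arithmetic. A minimal sketch, using the allocation figures quoted in the text and a hypothetical price index for fiscal years 2001 and 2005 (the report's actual deflator is not stated, so the rates are approximate):

```python
def real_annual_change(start, end, years, deflator_start, deflator_end):
    """Average annual change after converting the end-year amount
    to start-year dollars with a price index (hypothetical here)."""
    real_end = end * deflator_start / deflator_end
    return (real_end / start) ** (1.0 / years) - 1.0

# Daily operations allocations (in dollars) quoted in the report text:
yosemite = real_annual_change(22_583_000, 22_714_000, 4, 100.0, 113.0)
acadia = real_annual_change(4_279_000, 6_498_000, 4, 100.0, 113.0)
print(f"Yosemite: {yosemite:+.1%} per year")  # negative: a real-terms decline
print(f"Acadia:   {acadia:+.1%} per year")    # positive: a real-terms increase
```

The sign of the result is what places a park on either side of the 8-of-12 split described above.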
Tables 4 and 5 show allocations for daily operations and average annual increases or declines for the 12 park units we visited, from fiscal years 2001 through 2005. Despite increases in inflation-adjusted allocations for daily operations at 4 of the 12 park units visited, officials at all 12 park units explained that this funding did not increase commensurately with increases in operating costs and new management requirements. Park unit officials explained that these factors have reduced their flexibility in addressing other park priorities. Park unit officials reported that required salary increases exceeded the corresponding increases in their allocations for daily operations and that rising utility costs have further reduced their flexibility in managing those allocations. Park Service headquarters officials reported that from 2001 through 2005, the Park Service paid personnel cost increases enacted by the Congress. For example, from fiscal years 2001 through 2005, Congress enacted salary increases of about 4 percent per year for federal employees. Park Service officials reported that the Park Service covered these salary increases with appropriations provided in the ONPS account; however, the allocated amounts covered only about half of the required increases, and park units had to eliminate or defer spending to make up the difference. For example, officials at Gettysburg National Military Park stated that they achieved personnel cost savings by taking a number of actions to reduce spending, including refraining from filling—and delaying filling—several permanent and seasonal vacancies. Park officials estimated the resulting personnel cost savings from 2002 through 2005 at about $1,434,781, in inflation-adjusted terms.
Total personnel expenditures at the park unit declined from $4,460,000 in 2001 to $4,143,000 in 2005—an average annual decline of about 2 percent, in inflation-adjusted terms. In contrast, at Mount Rushmore National Memorial, total personnel expenditures increased from $2,014,000 in 2001 to $2,552,000 in 2005—or an average of about 6 percent per year. Officials said that the increase was due to required salary increases for permanent staff and expenditures on new personnel hired for homeland security measures. As shown in table 6, expenditures for personnel from 2001 through 2005 increased for seven park units and declined for the other five units, after adjusting for inflation. Personnel costs (salaries and benefits) comprised an average of 74 to 89 percent of total operating expenses at these 12 park units; therefore, officials said, it is difficult to offset increases in personnel costs without reducing personnel. Officials at several park units told us that since 2001, they have refrained from filling vacant positions or have filled them with lower-graded or seasonal employees. For example, in an effort to continue performing activities that directly affect visitors—such as cleaning restrooms and answering visitor questions—officials at Sequoia and Kings Canyon National Parks stated that they left several high-graded positions unfilled in order to hire a lower-graded workforce to perform these basic operational duties. Officials at most park units also told us that when positions were left vacant, the responsibilities of the remaining staff generally increased in order to fulfill park obligations. Park Service budget officials told us that they expect personnel costs to continue to grow faster than any increases in allocations for personnel in fiscal years 2006 and 2007. As a result, they said that in some cases parks may choose to hire seasonal employees or contract out duties rather than fill vacant positions.
Table 7 shows the average annual percentage of daily operations funding that the 12 park units we visited spent on personnel costs. In addition, Park Service budget officials said that park units' personnel costs have also increased because the agency pays more of the cost of benefits for employees under the newer Federal Employees Retirement System (FERS) than for employees under the older Civil Service Retirement System (CSRS). As a result, the officials said, total compensation (salary and benefits) is higher for a FERS employee than for a CSRS employee at the same salary level. Unlike CSRS, for example, FERS requires federal agencies to match up to 5 percent of employees' contributions to their retirement accounts. In addition, as CSRS employees retire and are replaced by FERS employees, the officials said, the Park Service's personnel costs will increase, all else remaining the same. At the park units we visited, benefits paid to FERS employees rose at a faster rate and were generally higher on average than those for CSRS employees. Nevertheless, at almost all the park units, average total compensation for a CSRS employee exceeded that for a FERS employee. For instance, at Shenandoah National Park, average benefits for a FERS employee increased at an annual rate of about 3 percent from 2001 through 2005, compared with about 2 percent per year for a CSRS employee (adjusted for inflation). In 2005, average FERS total compensation was $44,242, including $11,713 for benefits, compared with an average CSRS total of about $54,134, including $9,401 for benefits. Tables 16 and 17 in appendix III show nominal and inflation-adjusted personnel costs per retirement system at the 12 park units we visited. In addition to increasing personnel costs, officials at many of the park units we visited explained that rising utility costs caused parks to reduce spending in other areas.
For example, at Grand Teton National Park, park officials told us that to operate the same number of facilities and assets, costs for fuel, electricity, and solid waste removal increased from $435,010 in 2003 to $633,201 in 2005—an increase of 46 percent, when adjusted for inflation. Officials told us that, as a result, their utility budget for fiscal year 2005 was spent by June 2005—three months early. In August, the park accepted the transfer requests of two division chiefs and used the salaries from these vacancies to pay for utility costs for the remaining portion of the year. Officials at some parks attributed increased utility costs to new construction that was generally not accompanied by a corresponding increase in their allocation for daily operations. In 2003, Yellowstone National Park used line-item construction appropriations to build the Heritage Center, which houses 5.3 million artifacts of natural and cultural significance. In 2001, park officials requested but did not receive an additional $250,000 that they estimated was needed to pay the center's costs for power, water, sewer, and information technology. A Park Service headquarters official told us that while there is a need to replace old facilities with new construction, it is unlikely—given the overall fiscal demands on the federal government—that park units will receive corresponding increases in funding for daily operations to operate new facilities. Officials at most of the park units we visited also told us that their units generally did not receive additional allocations for administering new Park Service policies directed at reducing its maintenance backlog, implementing a new asset management strategy, or maintaining specified levels of law enforcement personnel (referred to as its no-net-loss policy), which has reduced their flexibility in addressing other park priorities.
While officials stated that these policies were important, implementing them without additional allocations reduced their management flexibility. Over the years, estimates of the agency's deferred maintenance backlog have varied widely—sometimes by billions of dollars. Since 1998, we have issued several reports on the agency's efforts to reduce its backlog. Since 2001, the Park Service has placed a high priority on reducing its currently estimated $5 billion maintenance backlog. In response, the Park Service, among other things, set a goal to spend the majority of its visitor fees on deferred maintenance projects—$75 million in 2002, increasing to $95 million in 2005. Officials at several park units reported that they have used daily operations allocations to absorb the cost of salaries for permanent staff needed to oversee the increasing number of visitor fee-funded projects. Park officials reported that the additional administrative and supervisory tasks associated with these projects add to the workload of an already-reduced permanent staff. For example, at Acadia National Park, officials told us that although visitor fee-funded projects have benefited the park, supervisors have reduced the extent to which they supervise their existing daily operating staff in order to manage temporary staff working on visitor fee-funded projects. Although the Park Service has the authority to use visitor fees to pay the salaries of permanent staff who manage and administer fee-funded projects, it has a policy prohibiting such use. Instead, these salaries are paid using allocations for daily operations, which reduces the amount of the allocation available for visitor services and other activities and limits park units' ability to maintain these services and activities. Park Service headquarters officials recognize the strain that this policy has placed on allocations for daily operations.
Park Service headquarters officials said that the policy was first established under the original Recreational Fee Demonstration Program, and they provided several reasons for it. First, the Park Service did not want park units to use the revenue to hire more permanent staff than the park units needed. In addition, the Park Service wanted the revenue to be used for projects that provided visible results, such as rehabilitating a visitor facility, rather than on salaries for permanent employees. It also did not want to use visitor fee revenue to hire permanent staff because the recreational fee demonstration authority was temporary, which would have forced park units to find another funding source to pay permanent employee salaries if the authority was discontinued. However, given the strain this policy has placed on allocations for daily operations, combined with the recent passage of the Federal Lands Recreation Enhancement Act, which provides longer-term authority (10 years) for collecting visitor fees, Park Service headquarters officials stated that they are considering changing the policy. To alleviate the pressure on funds for daily operations, we believe it would be appropriate for the Park Service to follow through with revising this policy. In an effort to better manage its maintenance backlog and improve asset management, the Park Service implemented a new asset management initiative in 1998. As part of this initiative, park units are required to complete condition assessments and maintain the data in the Facility Maintenance Software System (FMSS), a systemwide, integrated software management tool that tracks parks' assets, their condition, and the costs needed to keep each asset in good operating condition. Overall, park managers viewed this new system as a worthwhile endeavor. However, park officials explained that their units were not provided the additional funds needed to implement this new responsibility. 
As a result, most of the parks used existing staff to inventory assets and enter the data into the software system at the expense of their primary duties. According to officials at many of the park units we visited, staff no longer had sufficient time to perform primary duties and responsibilities, such as regularly scheduled preventive maintenance or bathroom cleaning. The effect of implementing FMSS was particularly problematic for park units whose maintenance divisions were already operating with reduced staff. For example, Badlands National Park, which has lost seven maintenance division employees since 2001, used the equivalent of two full-time employees and two seasonal employees to enter data and work on other duties related to FMSS. Because the park had to use existing staff to comply with the new asset-management requirements, regularly scheduled activities such as painting buildings and other structures were deferred, thus adding to its maintenance backlog. Another new Park Service policy affecting park units relates to law enforcement personnel. In response to studies describing the level of law enforcement personnel as approaching a point at which basic resource and visitor protection could be in jeopardy, the Park Service implemented a no-net-loss policy for law enforcement personnel in 2002. Accordingly, Park Service headquarters directed park units not to fall below 2002 law enforcement employee levels. Thus, unlike in other divisions, when law enforcement positions become vacant, officials are required to fill the vacancy or request a waiver of the policy. Officials at park units that have adhered to the policy told us that they have had to forgo filling what they consider priority vacant positions in other divisions. 
Officials at other park units have been unable to maintain 2002 levels, either because they could not afford to fill vacant law enforcement positions or because park management deemed other vacant positions also to be a priority. In response to trends in allocations for daily operations, increased costs, and new policy requirements, park officials at the 12 park units we visited said that activities funded with daily operations allocations have been reduced or eliminated, delayed until other authorized funding sources became available, or performed with the use of other authorized funding sources. Park managers reported that because they have to manage within available funding resources, they make trade-offs among operational activities such as education, visitor and resource protection, and maintenance. The extent and type of such responses vary among the park units. To address differences between allocations for daily operations and expenses, officials at the park units we visited reported that they reduced or eliminated some services paid with daily operations allocations—including some that directly affected visitors and park resources. Park officials at some of the park units we visited told us that before reducing services that directly affect visitors, they first reduced spending for training, equipment, travel, and supplies paid from daily operations allocations. However, most park units reported that they did reduce services that directly affect visitors, including reducing visitor center hours, educational programs, basic custodial duties, and law enforcement operations, such as backcountry patrolling. To illustrate: Shenandoah National Park reduced the number of days the Loft Mountain Visitor Contact Station operated in 2004 and then closed it entirely in 2005. This station offered the only interpretive services at the south end of the park; thus, visitors entering the park at the south end must drive 50 miles to reach another contact station. 
In addition, because the park could not afford to fill vacancies in 2002, it had to close all ranger programs at Mathews Arm campground in the north district (which contains 179 campsites) beginning in 2003. A park official said that, as of the beginning of 2006, no ranger programs were yet being offered at the Mathews Arm campground. Grand Teton National Park reduced the interpretive division's staffing level paid out of daily operations funding from 17 FTEs in 2001 to 12 FTEs in 2005. Because fewer staff were available, the park reduced the operating hours of the Colter Bay Visitor Center by one hour per day and reduced the number of times it offers the Tour of the Indians Art Museum and the Teton Highlights programs. At Bryce Canyon National Park, law enforcement officials told us that, since 2001, in order to maintain patrols in high-visitor-use areas, they have reduced backcountry patrolling. As a result, the park has very little backcountry resource protection capability. For example, while park officials are aware of poaching in the park, they told us that they do not have the capability to prevent or investigate this illegal activity. Acadia National Park closed all seven restrooms along roads and trailheads in the park's popular winter-use areas during the 2004-2005 winter season. Park officials told us that they chose to close the restrooms in the winter in order to have sufficient resources to keep them open in the summer. Grand Canyon National Park reduced interpretive programs available to visitors from 35 in 2001 to 23 in 2005. Zion National Park reduced cleaning of a heavily used restroom facility at a popular visitor destination from twice per day to once per day in 2004. Maintenance officials told us that, after reducing the cleaning frequency, they received several complaints about the condition of the facilities. 
At Gettysburg National Military Park, the Maintenance Division has lost one of its key preservation specialist positions, which was responsible for the technical repair and restoration of cannon carriages. According to park officials, the lack of daily operations funds to hire a replacement has impaired the park's cannon carriage restoration project, the first attempt to restore carriages dating from the 1890s. The inability to fill this position has limited the restoration effort, requiring previously stripped and primed carriages to be stored in inadequate storage areas throughout the park. Most carriages will require rework to reverse rust damage incurred while in storage. As a result, the estimated time to complete the project increased from 10 years to 15 years. The personnel costs required for this extended period, plus the need to rework the previously readied carriages, are estimated to increase the overall cost of the project by approximately $260,000. At Yellowstone National Park, the permanent law enforcement staffing level paid from daily operations funding was reduced from 51 FTEs in 2001 to 45 FTEs in 2005. Park officials told us that this resulted in fewer backcountry and frontcountry patrols and a reliance on less experienced and less trained personnel to perform these duties. At Yosemite National Park, park officials told us that, as a result of reduced funding levels, four vacant dispatcher positions cannot be filled—threatening the park's ability to provide 911 services 7 days per week, 24 hours per day. In order to fill the key deputy chief ranger and fire chief vacancies, park officials have had to forgo refilling several law enforcement and non-law-enforcement positions. As a result, remaining staff worked overtime to perform the added responsibilities. With expected retirements, officials said that a critical branch chief position will be unfilled, as will several patrol positions and positions to staff the jail. 
However, the Department of the Interior stated that Yosemite National Park is working with Lassen Volcanic National Park and Whiskeytown-Shasta-Trinity National Recreation Area to provide joint services and that Yosemite is in full 911 compliance. Law enforcement officials at Acadia National Park and Grand Canyon National Park explained that, after accounting for personnel costs, little is left to pay for equipment and supplies. For example, officials at Acadia National Park told us that they are unable to replace emergency response equipment, such as vehicles and boats. The park's law enforcement division lost two patrol cars in the last three years and has been unable to replace them. Officials at the park told us that to afford to replace one vehicle, they would have to forgo hiring a seasonal ranger—a position that park officials say they must maintain for the safety of park visitors and resources. At Grand Canyon National Park, 1.4 percent of the law enforcement division's funding for daily operations is available for law enforcement supplies and training. Officials at this park told us that this amount is not sufficient to pay for supplies such as first-aid provisions, ammunition, and bullet-proof vests. When funds allocated for daily operations were not sufficient to pay for activities that were previously paid with this source, the park units we visited reported that they deferred activities or relied on other authorized funding sources such as allocations for projects, visitor fees, donations from cooperating associations and friends groups, and concessions fees. Table 8 shows funding from other authorized sources at four of the 12 park units we visited. Tables 18 and 19 in appendix III show funding from other authorized sources for all 12 of the park units we visited. 
From 2001 to 2005, some parks delayed performing certain preventive maintenance activities formerly paid with allocations for daily operations until other authorized funding sources, such as project funds (including funds for cyclic maintenance, repair and rehabilitation, and visitor fees), could be found and approved. Park officials explained that when preventive maintenance is deferred, the integrity of an asset is reduced—which can lead to replacing the asset at a greater cost than repairing it. Park Service headquarters officials told us that they are concerned about this decreased capacity and have responded to the problem by requesting increases in project funding, such as cyclic maintenance, over the past few years. The following examples illustrate delayed activities at the park units we visited. Shenandoah National Park reduced maintenance staffing levels paid from daily operations funding from 67 FTEs in 2001 to 44 FTEs in 2005, which decreased the park's ability to perform routine maintenance of trails and scenic overlooks. This work was traditionally considered a recurring operational activity paid for on an annual basis through funding for daily operations. In 2002, as a result of limited funding for daily operations, the park did not have the staff or resources to do this work annually and instead began performing the tasks once every 2 or 3 years. The park currently uses cyclic maintenance project funding to carry out this work and plans to use visitor fees to pay for this activity in the future. At Grand Teton National Park, officials told us that the road striping and chip sealing process—which should be performed annually to extend the life of a road by 10 to 15 years—can no longer be paid with funding for daily operations. Consequently, officials told us that they have had to delay this maintenance activity and rely on less frequently available project funds. 
Rather than eliminating or not performing daily operational activities, some park units used volunteers and funding from authorized sources such as donations from nonprofit partners and concessionaires' fees to accomplish activities that were formerly paid with daily operations funds. Officials at several park units said that they increasingly depend on donations from cooperating associations to pay for training and equipment and rely on their staff and volunteers to provide information and educational programs to visitors that were traditionally offered by park rangers. Funds from these sources can be significant, but they are subject to change from year to year. For example, park officials explained that donations at Grand Teton grew from about $188,000 in 2001 to over $400,000 in 2004 and then jumped to over $8 million in 2005, when the park received a substantial gift for a new visitor center from its nonprofit park partners. For the most part, funding from these sources is intended to supplement, rather than replace, daily operations funds. However, officials told us that these funds are being used to pay for activities that were formerly paid with funding for daily operations. To illustrate: Officials at Sequoia and Kings Canyon National Parks told us that 60 percent of all visitor center staffing hours in 2005 were provided by their cooperating association, compared to approximately 10 percent in 2001. At Grand Canyon National Park, the interpretive division had approximately $75,000 available in daily operations funding in 2001 to pay for non-personnel costs such as travel and supplies. By 2005, approximately 99 percent of the division's funds were spent on personnel, and the park relied on its cooperating association to pay for non-personnel costs. As noted earlier, in 2003 Yellowstone National Park constructed The Heritage Center with line item construction appropriations to house 5.3 million artifacts of natural and cultural significance. 
In 2001, the park requested but did not receive $807,000 in additional daily operations funds to pay for the center's operating costs. While the park absorbed an estimated utility cost of $250,000 per year, it relied on its nonprofit partners—the Yellowstone Foundation and the Yellowstone Cooperative Association—to help staff, furnish, and support museum and archive acquisitions. Badlands National Park officials stated that approximately 65 percent of visitor contacts in 2004 were provided by employees of the park's nonprofit partner—the Badlands Natural History Association—compared to 45 percent in 2001. At Grand Teton National Park and Gettysburg National Military Park, park partners are paying for the construction of new visitor centers and are creating endowments to operate the new facilities for a set number of years. In 2005, Grand Teton National Park turned over operations of five campgrounds to concessionaires. Park officials reported that by transferring these campgrounds, they reduced personnel and maintenance costs associated with operating them. However, officials stated that a reduction in park-funded seasonal custodians has meant that fewer staff are available to clean restrooms and pick up litter. Officials said there was a noticeable increase in litter in the park in 2005. Acadia National Park's partner, The Friends of Acadia, has provided funding and volunteer hours to maintain the park's trail system. Other parks, including Grand Teton National Park and Yellowstone National Park, are considering similar options to maintain their trail systems because funding for daily operations is no longer available to cover all operational needs. 
Officials at several park units expressed concern about using funding from other authorized sources to address shortfalls—not only because the funds can vary from year to year, but also because partners' stipulations on how their donations can be used may differ from the parks' priorities. As a result, relying on these sources for programs that require a long-term funding commitment could be problematic. For example, until 2004 the Natural Resources Division at Badlands National Park used visitor fees to pay for natural resource programs (e.g., bighorn sheep restoration and non-native plant control). However, to meet deferred maintenance spending goals, the park could no longer submit projects for approval to use visitor fee revenue to support natural resource programs. Officials at several park units also told us that, as they increasingly rely on such sources, more of their time must be spent cultivating relationships and applying for grants rather than performing their regular duties. The Park Service identified three management initiatives that it has undertaken to address the fiscal performance and accountability of park units and to help park units better manage within their available resources: the Business Plan Initiative (BPI), the Core Operations Analysis (COA), and the Park Scorecard. The initiatives operate independently and are at various stages of development and implementation. In addition, the department noted in its comments to us that other efforts, such as the Office of Management and Budget's analysis under the Program Assessment Rating Tool (PART), contribute to park unit and departmental efforts to achieve more effective programs and more efficient operations. Table 9 summarizes each of the three initiatives that we reviewed and their stages of implementation. 
Through the BPI process, park unit staff—with the help of business interns from the Student Conservation Association—identify all sources and uses of park funds to determine the funding levels needed to operate and manage park units. Using this information, park unit managers develop a 5-year business plan to address any gaps between available funds and the park unit's operational and maintenance needs. The BPI process involves six steps completed over an 11-week period. Park staff and the business interns (1) identify the park unit's mission; (2) conduct an inventory of park assets; (3) analyze park funding trends; (4) identify sources and uses of park funding; (5) analyze park operations and maintenance needs; and (6) develop a strategic business plan to address gaps between funds and park needs. The BPI began in 1997 as a result of a partnership between the Park Service and the National Parks Conservation Association. Their goals were to ensure that superintendents of park units had the knowledge and data to develop cost-reducing strategies and make a rational case for funding proposals. Yellowstone National Park completed the first business plan in 1997. Since then, about 25 percent of all park units have participated in the process. Most of the participation has come from smaller park units—those with a budget for daily operations under $2 million per year. The Park Service selects about 12 park units per year to participate in the BPI process, but participation is voluntary. Park units are selected based on a number of factors, including (1) geographic diversity; (2) unit type (e.g., national park, national historic site, national recreation area, national monument); (3) whether the park units have sufficient staffing and funding resources to conduct the BPI process; and (4) whether the timing is appropriate for the park unit to conduct a BPI. 
For instance, in some cases, park units selected for the BPI are subsequently unable to participate because they are undergoing major management initiatives or changes (e.g., preparing a general management plan or changing park superintendents); a park unit may also experience an event that represents an anomaly and may skew its financial condition. For example, the Canaveral National Seashore was scheduled to complete the BPI process in fiscal year 2005 but did not do so because hurricanes damaged some of the park unit's assets in 2004. All 12 of the park units we visited have completed a business plan. Many officials—both at the unit level and at headquarters—stated that business plans are, among other things, useful in helping them identify future budget needs. Once a plan is completed, park managers often issue a press release announcing it. Park managers may also send copies to their legislators, local community councils, and park unit partners (such as cooperating associations) to communicate the results. A Park Service official stated, however, that the Park Service is still working to refine how these business plans can serve as a better tool for justifying funding needs. The COA was developed in 2004 to help park unit managers evaluate their park unit's core mission, identify essential park unit activities and associated funding levels, and make fully informed decisions on staffing and funding. The COA is part of a broader Park Service-wide effort to integrate management tools to improve park efficiency. Park Service headquarters and regional officials and park unit staffs work together in a step-by-step process to conduct the analysis. 
These steps include preparing a 5-year budget cost projection (BCP) to establish baseline financial information and help project future park needs, defining the core elements of the park unit's mission, identifying park priorities, reviewing and analyzing activities and associated staff resources, and identifying efficiencies. Budget staff for each park unit first complete a 5-year BCP that uses the current year's funding level for daily operations as a baseline and estimates future funding levels, increases in non-personnel costs, and fixed costs such as salaries and benefits. The general target of the analysis is to keep personal services and fixed costs at or below 80 percent of the unit's funding levels for daily operations. The BCP model relies heavily on fixed costs; however, the Park Service has not developed a servicewide standard definition of fixed costs, so individual park units may calculate them differently. For example, fixed costs at some of the park units we visited included the costs of both personnel and utilities, whereas at other park units they included only personnel costs. As such, the fixed costs used in the BCP model vary among park units. Although the COA is still in the development stage, the Park Service plans to have all units complete an analysis by the end of fiscal year 2011. To achieve this goal, the Park Service will select 50 parks per year to participate. Three of the 12 park units we visited have completed (or are in the process of completing) a COA, and three will begin the COA in fiscal year 2006. The remaining six park units we visited have yet to be selected. Park unit officials told us that preliminary results have helped them determine where efficiencies in operations might accrue. A Park Service regional official told us that the core operations process is still in its early development, noting that preliminary results are useful but that it is too early to determine what results park units will realize. 
Park Service headquarters developed the Park Scorecard, beginning in fiscal year 2004, to serve as an indicator of each park unit's fiscal and operational condition and managerial performance. The Scorecard is intended to provide an overarching summary of each park unit's condition by offering a way to analyze individual park unit needs. It also provides Park Service officials with the information needed to understand how park units compare to one another based on broad financial, organizational, recreational, and resource-management criteria. The Park Scorecard uses data from Park Service-wide databases already used by all park units. Park Service headquarters uses over 30 separate indicators as measures of the condition of park units. Examples of these indicators include personnel costs as a percentage of daily operations allocations, average overtime costs, the ratio of volunteer hours to total Park Service hours, operational and maintenance costs per square foot, and annual growth in visitation. The analysis assigns a numerical value to each measure, leading to an assessment of each park unit as being in poor, fair, good, or excellent operational condition. Although the Park Scorecard is still under development, the Park Service's headquarters budget office used it to validate and approve requests for increases in daily operations allocations for the highest priorities among park units, funded out of a total of $12.5 million provided in 2005 for daily operations directed at visitor service programs. The Park Service approved requests for funding at 3 of the 12 parks we visited (Badlands National Park, Grand Teton National Park, and Yellowstone National Park). 
Park Service officials explained that while Park Scorecard figures can generate useful park unit comparisons, regional policies can also influence the indicators; thus, although these numbers provide a good starting point for analysis, park unit staff input must be a consideration in determining park priorities. Park officials further explained that it is difficult to develop a set of common indicators that can be used for park units with characteristics as different as those of Yellowstone National Park and Carl Sandburg Home National Historic Site. Park Service headquarters officials, with the assistance and input of park unit managers, plan to refine the Park Scorecard to more accurately capture all appropriate park measurements and to identify, evaluate, and support future budget increases for park units. The Park Service also intends for park managers to use the Park Scorecard to facilitate discussions about their needs and priorities. From 2001 through 2004, the Park Service increased allocations for support programs and project funding while placing less emphasis on allocations for daily operations. In 2005, however, the agency's emphasis shifted toward an increase in allocations for daily operations. As evidenced by our visits to 12 park units, this latter shift appears to be going in the direction needed to help the park units overcome some of the difficulties they have recently experienced in meeting operational needs—particularly as they relate to maintaining visitor services and protecting resources. In responding to these trends, park unit officials found ways to reduce spending from their allocations for daily operations and identified and used authorized sources other than these allocations to minimize some impacts on park operations and visitor services. 
While park units are relying more on other authorized sources to perform operations, using such funds has its drawbacks because it usually takes park units longer, and requires more effort from park employees, to obtain and use these sources. In the case of donations, for example, park officials spend more time cultivating relationships with donors to obtain the funds. Visitor fees have been an important and significant source of funds for park units to address high-priority needs, such as reducing the maintenance backlog. However, the Park Service policy prohibiting the use of visitor fees to pay the salaries of permanent employees managing fee-funded projects may reduce park units' flexibility in managing their funding for daily operations. While Park Service officials stated that they are embarking upon three management initiatives to improve park performance and accountability—and to better manage within available resources—it is too early to assess the effectiveness of these initiatives. To reduce some of the pressure on funding for daily operations, we are recommending that the Secretary of the Interior direct the Park Service Director to follow through in revising Park Service policy to allow park units to use visitor fee revenues to pay the costs of permanent employees administering projects funded by visitor fees, to the extent authorized by law. We provided the Department of the Interior with a draft of this report for review and comment. The department provided written comments that are included in appendix V. The following is a summary of the major comments made by the department and our responses. Additional comments and our responses are also provided in appendix V. With regard to our recommendation, the department stated that we should clearly state that visitor fee revenue (and not other sources) be used to fund only a limited number of permanent employees and be specifically defined for the sole purpose of executing projects funded from fee revenue. 
Our recommendation was specifically directed at using visitor fee revenues to pay the salaries of permanent employees who administer projects funded with such revenues, and it provides the Park Service with the flexibility to define how the visitor fee revenues should be applied. Accordingly, we have not modified our recommendation in response to the department's comment. The department appreciated the diligent work of the team that prepared the report and the large amount of data collected, but it expressed concern that the presentation of the data in the report creates a misleading impression concerning the state of park operations for several reasons. The department said our report provided an incomplete analysis of the financial status of the park units and left the impression that park budgets have not been emphasized. We disagree with this view. We conducted a detailed analysis of the major funding trends for park operations. For example, we reported the overall funding trends for operations, including appropriations from the ONPS account, in relation to the Park Service's total budget authority. As the report indicates, this trend showed that appropriations to the ONPS account increased overall during our study time frame at a higher rate than the Park Service's total budget authority. We also analyzed the trends in both allocations for daily operations and allocations for projects, for the Park Service as a whole and for each of the 12 high-visitation park units we visited. Moreover, the report showed that the fiscal year 2005 appropriation for the ONPS account included an additional $37.5 million over the amounts proposed by the House and Senate for the Operation of the National Park System account, to be used for daily operations. Furthermore, the report discusses the impacts that these trends have had on operations at the 12 parks we visited. 
In response to the department's comments, we have included more examples in the report showing where project funds have been used by park units. The department also commented that, within a constrained fiscal environment, park operations have been a high priority for both the Administration and the Congress. Assessing the relative priority given to park operations would require a much broader review comparing the Park Service's budget with the budgets of other federal agencies, which was beyond the scope of our review. The department commented that the report draws a "false dichotomy" between operations and project funding. Specifically, it said that the visitor experience at national parks is shaped not only by direct visitor services activities, such as ranger interpretive programs, but also by the condition of park facilities and natural resources. We agree that daily operations allocations—which fund activities such as ranger interpretive programs—and project allocations—which fund facility improvements—are both important to park operations and visitor experiences. Furthermore, we believe there is an important distinction between how park units can use daily operations allocations as opposed to allocations for projects. In fact, the Park Service itself allocates ONPS appropriations in these distinct categories. Daily operations allocations are used to pay operating expenses, such as permanent and temporary employees who perform day-to-day activities such as interpretive programs and cleaning restrooms. In contrast, Park Service procedures require that project-related allocations be used only for projects and not for day-to-day activities. The report recognizes this distinction by presenting these trends separately and by providing examples of how park units are using these two sources of allocations to conduct operations. The department also stated that the report's use of several park anecdotes concerning reduced allocations for daily operations is misleading. 
Specifically, the department stated that the anecdotes within the report highlight only certain divisions or programs in which a park significantly reduced staffing, in isolation from the park unit’s overall staffing, allocations for daily operations, and allocations for projects, as well as overall employment levels at the Park Service as a whole. While the department noted in its comments that, overall, the balance of seasonal and permanent employees remained stable in 2005 compared to 2001, we found that for most of the 12 high-visitation park units we visited, the ratio of seasonal to permanent employees increased. We believe that these park-specific full-time equivalent (FTE) trends are better indicators of an individual park unit’s ability to maintain services than servicewide FTE trends. Analysis of activities at 12 specific park units was one of our report objectives, and we continue to believe that the specific park examples add to the report by illuminating the issues identified at the 12 park units that we visited—namely, that officials at the park units reported that their daily operations allocations have not kept pace with increasing personnel costs, rising utility costs, and increased responsibilities. We provided examples of the tradeoffs park managers made to manage within their available resources that illustrate what park managers consistently told us about their ability to maintain park operations such as visitor service levels. In addition, we provided overall FTE trends for the park units we visited, including those FTEs paid with allocations for daily operations and those paid with other authorized sources. These trends show that most of the park units are increasingly relying on sources other than daily operations allocations to maintain FTE levels.
In addition, the department said that the report relies on the use of budget and financial data but does not examine performance information, trends in accomplishments, or efforts to improve service delivery over the time period of our study. Specifically, it mentioned the Park Service’s and the administration’s measurement of performance and related cost information, the analysis of allocations for daily operations through the Program Assessment Rating Tool (PART) process, and efforts in management excellence. It said that all of these efforts, including Park Service-specific tools such as the Core Operations Analysis (COA), are yielding results in achieving more effective programs and more efficient operations. In addition, the department stated that the Park Service has adopted new ways of doing business, including centralizing some services and systems under the department. Specifically, the comments describe a department-wide effort to purchase information technology hardware and software and other consumables, as well as Park Service efforts to limit travel, provide more efficient training, and use volunteers. We added additional information to the report to reflect these efforts. As recognized by the department, the report provides information on the major management initiatives that the Park Service has undertaken, such as the COA, the Business Plan Initiative (BPI), and the Park Scorecard, which are designed to help managers make fully informed decisions that direct park resources toward functions that are essential to achieving mission goals and that also serve as part of management planning efforts. With regard to the department’s comment regarding accomplishments, we point out that, for the most part, the initiatives under way were in their early stages of development, and it was too soon to determine results.
We did, however, identify several examples of how park managers at the parks we visited reported that they are increasingly relying on volunteers to perform activities that were previously funded through allocations for daily operations, as well as their efforts to limit travel and training, among other expenses, to reduce impacts on visitor services. Finally, the department commented on the specific price index we used for deflating Park Service funding and operating costs—the Gross Domestic Product (GDP) Price Index for Government Consumption Expenditures and Gross Investment (federal nondefense sector)—which measures changes in the value of government output using the cost of inputs such as compensation of employees. While acknowledging that there is not a perfect inflation adjustment index available to accurately measure Park Service operating costs, the department said that it believes it might be more appropriate to use the GDP (Chained) Price Index because it is based upon the costs of goods and services in the marketplace and therefore considers productivity and other management enhancements; the department also said that this broader price index is not a perfect index either. The department added that using the broader index would provide significantly different results; that is, the inflation-adjusted trends in funding for daily operations would generally be more positive. We agree that there is not a perfect index available to accurately measure Park Service operating costs, and we agree that using a broader index would yield different results. Nonetheless, we believe that using the GDP Price Index for Government Consumption Expenditures and Gross Investment (federal nondefense sector) better represents the real quantity of services that the agency’s budget provides over time. In general, when removing the effects of price changes, it is preferable to use a specific price index that matches the composition of the nominal dollar amounts under consideration.
As we noted in the report, this price index reflects changes in the value of government output, as measured by the cost of inputs such as compensation of employees and purchases of goods and services. Input costs are used in constructing the index because most government output is not sold in the marketplace. For the Park Service, most of the operating costs consist of employee compensation. As a result, the specific price index we used assigns greater weight to changes in federal workers’ compensation than does the more general GDP (Chained) Price Index. While the GDP (Chained) Price Index reflects productivity improvements in the overall economy, it is partly based on input costs, and a large portion of the basket of goods it represents reflects personal consumption, including food, clothing, and housing, which are less relevant for assessing real trends in the Park Service’s operating costs. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Interior and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. This appendix presents the methods we used to gather information on National Park Service (Park Service) funding trends, their impacts on selected park units, and management initiatives under way to address fiscal performance and accountability.
To identify funding trends for Park Service operations and visitor fees from fiscal years 2001 through 2005, we obtained and analyzed appropriations legislation, including appropriations for the Operation of the National Park System account (consisting of funding for daily operations, projects, and other support programs), and visitor fees. We analyzed the data in both nominal (actual) and real (adjusted for inflation) terms. To remove the effects of inflation, we adjusted nominal dollars using the Gross Domestic Product (GDP) Price Index for Government Consumption Expenditures and Gross Investment (federal nondefense sector), with 2001 as the base year. The price index reflects changes in the value of government output, measured by the cost of inputs, including compensation of employees and purchases of goods and services. Consistent with the proportion of the Park Service’s operating expenditures on personnel, this price index is more heavily weighted by changes in federal workers’ compensation than the overall GDP price index. We gathered funding data from the Park Service Budget Office on allocations from the ONPS account for daily operations, projects, and other support programs. In addition to obtaining data on allocations for daily operations on a servicewide level, we also gathered data on the allocations for daily operations for individual park units to determine how many have received operating increases or decreases, and how many have remained relatively constant. We also obtained data on recreation visits from the Park Service’s Public Use Statistics Office for park units to analyze allocations for daily operations in relation to visitation rates. 
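The deflation method described above can be sketched in a few lines. This is an illustrative calculation only; the function name and the index and dollar values below are hypothetical placeholders, not actual GDP Price Index figures or Park Service amounts.

```python
# Minimal sketch of deflating nominal dollars to base-year (real) dollars
# with a price index, as described in the methodology above.
# All numeric values are hypothetical and for illustration only.

def to_real_dollars(nominal, index_year, index_base):
    """Convert nominal dollars to real (base-year) dollars:
    real = nominal * (base-year index / current-year index)."""
    return nominal * (index_base / index_year)

# Example: deflate a hypothetical fiscal year 2005 allocation to 2001
# dollars, assuming the index rose from 100.0 (2001, base) to 115.0 (2005).
nominal_2005 = 1030.0  # millions of dollars, hypothetical
real_2005 = to_real_dollars(nominal_2005, index_year=115.0, index_base=100.0)
```

With these placeholder values, a nominal allocation of $1,030 million in 2005 corresponds to roughly $896 million in 2001 dollars, which is how a nominal increase can become a real (inflation-adjusted) decline.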
We also interviewed agency officials at Park Service Headquarters, the Pacific West Region, the Intermountain Region, and individual park units in addition to the 12 listed below, including Mount Rainier National Park, Olympic National Park, Point Reyes National Seashore, and the San Francisco Maritime National Historical Park. We assessed the reliability of the data by reviewing the methods of data collection for relevant Park Service databases. We determined that the data were sufficiently reliable for the uses in this report. To determine the funding trends for certain individual park units and how the trends affected their ability to provide services to visitors, we collected and analyzed data and reviewed operational impacts at a nonprobability sample of 12 park units; we also interviewed park unit officials about their funding trends, operational impacts, and policy requirements. The 12 park units represent a cross-section of high-visitation parks (greater than 500,000 visits per year) with a potentially large number of visitor services, regional diversity, and a range of allocations for daily operations. In addition, based on preliminary figures, we sought a cross-section of parks that had sustained varying levels of growth in their allocations for daily operations. Table 10 lists the 12 parks we visited, their primary features, and their locations. For each of the 12 park units, we collected data on funding trends and park operations, including visitor services. We collected park data on budget formulation, budget allocation, expenditures, and staffing trends. We sent uniform data requests to the 12 park units, provided uniform guidance, and interactively worked with park officials to compile the data. We also obtained information on operations such as (1) visitor and resource protection (e.g., law enforcement rangers), (2) facilities operation and maintenance (e.g.,
opening a campground or a visitor center and maintaining a building or trail), (3) resource management (e.g., monitoring the condition of threatened species or water quality), (4) interpretation and education (e.g., interpretive rangers who provide educational programs), and (5) park administration and support (e.g., updating computer systems or attending training). Each of these operational areas has some role in providing visitor services. We assessed the reliability of the data by reviewing the methods of data collection for relevant Park Service databases. We determined that the data are sufficiently reliable for the purposes of this report. To identify recent management initiatives the Park Service has under way to address fiscal performance and accountability for fiscal years 2001 to 2005, we gathered and reviewed documentation on several management initiatives, including the Business Plan Initiative, the Core Operations Analysis, and the Park Scorecard. For the Business Plan Initiative, we interviewed Park Service officials at headquarters and individual park units on the content of the analysis, procedures, and final plans. For the Core Operations Analysis, we interviewed park officials in the Intermountain Region and at individual park units that are in the process of performing the analysis, including Grand Canyon National Park and Yellowstone National Park. For the Park Scorecard, we reviewed documentation and interviewed Park Service Headquarters officials on the development and implementation of the initiative. We conducted our work from January 2005 to March 2006 in accordance with generally accepted government auditing standards. Tables 11 and 12 show trends in appropriations in both nominal and inflation-adjusted terms for the Operation of the National Park System account, including allocations for daily operations and support programs.
In addition, the tables show the trends for visitor fees collected by the Park Service from fiscal years 2001 through 2005. The following tables summarize the data collected from the 12 selected park units, including 2001 through 2005 total park unit labor expenditures; personnel levels by funding source; employee and labor costs by retirement system (CSRS and FERS); and funding levels by other funding source types. Table 20 shows, for fiscal years 2001 through 2005, recreation visitation trends for the 12 selected park units we visited compared to the entire Park Service. 1. We agree that the overall 2005 ONPS account was $1.7 billion—an increase of about 21 percent over the 2001 level. We reported this increase on an average annual basis of about 4.9 percent per year from 2001 through 2005, which is equivalent to about 21 percent from 2001 to 2005 in nominal terms. In addition, we added information on the department’s comment that the Park Service has received significant operating increases since 1980, particularly compared to other domestic agencies. 2. For the park units we visited, we provided data and analysis on the major funding trends for the park units, namely, allocations for daily operations, project-related allocations, visitor fees, concessions fees, and others. We added examples of specific project allocations to the park units we visited and how they were used as reported by the park units. 3. On page 12 of the report, we provided information on the Park Service’s overall budget authority. In addition, we agree that the allocations for daily operations increased by about 14 percent from 2001 through 2005. However, we believe it is also important to look at the change in inflation-adjusted terms. We believe the information we provided in the report fairly describes the emphasis placed by the Congress and the Administration on Park Service operations over our 5-year study time frame. 4.
According to the Department of the Interior, the allocation for daily operations increased more in dollar terms than any other Park Service program between 2001 and 2005. However, on an average annual basis, the percentage increase over this period was less than for other programs. In addition, after adjusting for inflation, the allocation for daily operations fell slightly from about $903 million in 2001 to about $893 million in 2005—an average annual decline of about $2.5 million, or 0.3 percent. 5. We disagree with the assertion that our analysis presents a “false dichotomy between operations and project funding.” This is addressed more fully on pages 46 and 47. 6. On page 19 of the report, we include allocations for cyclic maintenance, repair and rehabilitation, and the inventory and monitoring programs from fiscal years 2001 through 2005. We believe this reflects the Park Service’s continued emphasis on efforts to reduce its deferred maintenance backlog and the monitoring and protection of the natural resources in its charge. 7. The report provides the allocation trends for existing programs such as the Inventory and Monitoring program, which is a large component of the Natural Resource Challenge. To provide additional information on this effort, we added information in the report on the total allocations from fiscal years 2001 through 2005—$62 million in nominal dollars. We also added examples of specific projects at park units we visited, some of which were funded through project allocations under the Natural Resource Challenge. 8. See comment 1 above and the table attached to the department’s comments. 9. Although analyzing Park Service spending per visit is an indicator, we believe such analysis is of limited use because it does not indicate how the expenditures are used. 10. See page 47 for our response. In addition, we used examples from park unit divisions that we visited in an effort to illustrate specific impacts on park operations. 
As the department pointed out, Grand Teton National Park’s overall FTE data indicate that seasonal employees increased from 54 to 73 from 2001 through 2005. However, this increase was mostly due to additional seasonal employees who were hired with other authorized funding sources—from 17 to 46. The seasonal FTEs paid for through daily operations allocations, in fact, decreased from 37 to 28. Employees paid for through project-related allocations are hired to conduct work on specific projects, while those funded through daily operations allocations can be used more flexibly within a division to carry out operational activities such as cleaning restrooms and picking up litter. 11. We agree that operational funding is one of several factors that contribute to employment levels at individual park units. Because management at the park unit level has discretion to manage within available resources, we asked park unit officials to report the level of FTEs funded per division, per funding source, and per employee type. In this way, we were better able to substantiate the anecdotes we chose to use in the report and to determine the parks’ staffing composition. For example, at Grand Teton National Park, the number of permanent FTEs funded through daily operations allocations decreased by 2 from 2001 through 2005, while those funded through project allocations and other authorized funding sources increased by 16. 12. See page 47, which discusses our response to this comment. 13. We noted these additional non-park-specific efforts on page 40 of the report. 14. We agree that management decisions are made within a dynamic environment of shifting priorities and resources. The specific examples we provide highlight projects and activities that were accomplished, or were not accomplished, given the resources available to individual park units.
We agree that the Park Service has worked to accommodate the impact of pay increases and across-the-board reductions; however, we did not study the level of visitor satisfaction throughout this time frame. Many of the park unit officials we spoke with explained that, in an effort to manage within available resources, certain activities that directly affect the visitor can no longer be provided for with daily operations allocations. These activities must then either be reduced, eliminated, or paid for using other authorized funding sources. For instance, we found that some activities traditionally provided by a Park Service employee were now being provided by volunteers. 15. See page 48, which discusses our response to this comment. 16. At the time we visited Zion National Park, it had not yet completed its COA. Because the analysis was completed after our visit, we have not had the opportunity to validate the department’s claim that Zion National Park achieved overtime and premium pay savings of $30,000 as a result of the COA. 17. We added additional information on page 40 of the report to reflect these efforts. 18. See pages 48 and 49, which discuss our response to this comment. 19. See pages 48 and 49, which discuss our response to this comment. 20. We added additional information in the report to address this comment. 21. See pages 45 and 46, which discuss our response to this comment. In addition to the individual named above, Roy Judy, Assistant Director, Thomas Armstrong, Jay Berman, Ulana Bihun, Denise Fantone, Doreen Feldman, Tim Guinane, Susan Irving, Richard Johnson, Hannah Laufe, Alison O’Neill, Claudine Pauselli, Jamie Roberts, Patrick Sigl, Paul Staley, and Walter Vance made key contributions to this report. National Park Service: Efforts Underway to Address Its Maintenance Backlog. GAO-03-1177T. Washington, D.C.: September 27, 2003. National Park Service: Status of Agency Efforts to Address Its Maintenance Backlog. GAO-03-992T. Washington, D.C.: July 8, 2003.
National Park Service: Status of Efforts to Develop Better Deferred Maintenance Data. GAO-02-568R. Washington, D.C.: April 12, 2002. National Park Service: Efforts to Identify and Manage the Maintenance Backlog. GAO/RCED-98-143. Washington, D.C.: May 14, 1998. National Park Service: Maintenance Backlog Issues. GAO/T-RCED-98-61. Washington, D.C.: February 4, 1998. Recreation Fees: Comments on the Federal Lands Recreation Enhancement Act, H.R. 3283. GAO-04-745T. Washington, D.C.: May 6, 2004. Recreation Fees: Management Improvements Can Help the Demonstration Program Enhance Visitor Services. GAO-02-10. Washington, D.C.: November 26, 2001. National Park Service: Recreational Fee Demonstration Program Spending Priorities. GAO/RCED-00-37R. Washington, D.C.: November 18, 1999. Recreation Fees: Demonstration Has Increased Revenues, but Impact on Park Service Backlog Is Uncertain. GAO/T-RCED-99-101. Washington, D.C.: March 3, 1999. Recreation Fees: Demonstration Program Successful in Raising Revenues but Could Be Improved. GAO/T-RCED-99-77. Washington, D.C.: February 4, 1999. Recreation Fees: Demonstration Fee Program Successful in Raising Revenues but Could Be Improved. GAO/RCED-99-7. Washington, D.C.: November 20, 1998. Wildlife Management: Negotiations on a Long-Term Plan for Managing Yellowstone Bison Still Ongoing. GAO/RCED-00-7. Washington, D.C.: November 30, 1999. National Park Service: Efforts to Link Resources to Results Suggest Insights for Other Agencies. AIMD-98-113. Washington, D.C.: April 10, 1998. Wildlife Management: Issues Concerning the Management of Bison and Elk Herds in Yellowstone National Park. GAO/T-RCED-97-200. Washington, D.C.: July 10, 1997. National Parks: Park Service Needs Better Information to Preserve and Protect Resources. GAO/T-RCED-97-76. Washington, D.C.: February 27, 1997. National Park Service: Activities Within Park Borders Have Caused Damage to Resources. GAO/RCED-96-202. Washington, D.C.: August 23, 1996. 
National Park Foundation: Better Communication of Roles and Responsibilities Is Needed to Strengthen Partnership with the National Park Service. GAO-04-541. Washington, D.C.: May 17, 2004. Park Service: Agency Needs to Better Manage the Increasing Role of Nonprofit Partners. GAO-03-585. Washington, D.C.: July 18, 2003. Park Service: Need to Address Key Management Problems That Plague the Concessions Program. GAO/T-RCED-00-136. Washington, D.C.: June 15, 2000. Park Service: Need to Address Management Problems That Plague the Concessions Program. GAO/T-RCED-00-188. Washington, D.C.: May 24, 2000. Park Service: Need to Address Management Problems That Plague the Concessions Program. GAO/RCED-00-70. Washington, D.C.: March 31, 2000. National Park Service: Concession Reform Issues. GAO/T-RCED-98-122. Washington, D.C.: March 12, 1998. Federal Lands: Concession Reform is Needed. GAO/T-RCED/GGD-96-223. Washington, D.C.: July 18, 1996. National Park Service: Concerns About the Implementation of Its Employee Housing Policy. GAO/T-RCED-99-119. Washington, D.C.: March 17, 1999. National Park Service: Employee Housing Issues. GAO/T-RCED-98-35. Washington, D.C.: October 29, 1997. National Park Service: Opportunities Exist to Clarify and Strengthen Special Uses Permit Guidance on Setting Grazing Fees and Cost- Recovery. GAO-06-355R. Washington, D.C.: February 9, 2006. National Park Service: Revenues Could Increase by Charging Allowed Fees for Some Special Uses Permits. GAO-05-410. Washington, D.C.: May 6, 2005. National Park Service: Managed Properties in the District of Columbia. GAO-05-378. Washington, D.C.: April 15, 2005. National Park Service: A More Systematic Process for Establishing National Heritage Areas and Actions to Improve Their Accountability Are Needed. GAO-04-593T. Washington, D.C.: March 30, 2004. National Park Service: Actions Needed to Improve Travel Cost Management. GAO-03-354. Washington, D.C.: February 13, 2003. 
National Park Service: Opportunities to Improve the Administration of the Alternative Transportation Program. GAO-03-166R. Washington, D.C.: November 15, 2002. Park Service: Visitor Center Project Costs, Size, and Functions Vary Widely. GAO-01-781. Washington, D.C.: July 24, 2001. Park Service: Agency Is Not Meeting Its Structural Fire Safety Responsibilities. GAO/T-RCED-00-253. Washington, D.C.: July 19, 2000. National Park Service: Flood Recovery Efforts at Yosemite National Park, California. GAO/RCED-99-50R. Washington, D.C.: January 27, 1999. National Park Service: Efforts to Link Resources to Results Suggest Insights for Other Agencies. AIMD-98-113. Washington, D.C.: April 10, 1998. Park Service: Managing for Results Could Strengthen Accountability. GAO/RCED-97-125. Washington, D.C.: April 10, 1997. Land Management Agencies: Information on Selected Administrative Policies and Practices. GAO/RCED-97-40. Washington, D.C.: February 11, 1997. National Parks: Difficult Choices Need to Be Made About the Future of the Parks. GAO/RCED-95-238. Washington, D.C.: August 30, 1995. National Park Service: Difficult Choices Need to Be Made on the Future of the Parks. GAO/T-RCED-95-124. Washington, D.C.: March 7, 1995. | In recent years, some reports prepared by advocacy groups have raised issues concerning the adequacy of the Park Service's financial resources needed to effectively operate the park units. GAO was asked to identify (1) funding trends for Park Service operations and visitor fees for fiscal years 2001-2005; (2) specific funding trends for 12 selected high-visitation park units and how, if at all, the funding trends have affected operations; and (3) recent management initiatives the Park Service has undertaken to address fiscal performance and accountability of park units. Overall, amounts appropriated to the National Park Service (Park Service) in the Operation of the National Park System account increased from 2001 to 2005.
In inflation-adjusted terms, amounts allocated by the Park Service to park units from this appropriation for daily operations declined, while project-related allocations increased. Project-related allocations increased primarily in (1) cyclic maintenance and repair and rehabilitation programs, reflecting an emphasis on reducing the estimated $5 billion maintenance backlog, and (2) the inventory and monitoring program, to protect natural resources through the Natural Resource Challenge initiative. Also, on an average annual basis, visitor fees collected increased about 1 percent, a 2 percent decline when adjusted for inflation. All park units we visited received project-related allocations, but most of the park units experienced declines in inflation-adjusted terms in their allocations for daily operations. Each of the 12 park units reported that their daily operations allocations were not sufficient to address increases in operating costs, such as salaries, and new Park Service requirements. In response, officials reported that they either eliminated or reduced services or relied on other authorized sources to pay operating expenses that have historically been paid with allocations for daily operations. Also, implementing important Park Service policies without additional allocations has placed additional demands on the park units and reduced their flexibility. For example, the Park Service has directed its park units to spend most of their visitor fees on deferred maintenance projects. While the Park Service may use visitor fees to pay salaries for permanent staff who administer projects funded with these fees, it has a policy prohibiting such use. To alleviate the pressure on daily operations allocations, we believe it would be appropriate to use visitor fees to pay the salaries of employees working on visitor fee-funded projects.
Interior believes that, while employment levels at individual park units may have fluctuated for many reasons, employment servicewide was stable, including both seasonal and permanent employees. GAO identified three initiatives--the Business Plan Initiative, the Core Operations Analysis, and the Park Scorecard--to address park units' fiscal performance and operational condition. Of the park units we visited with a business plan, officials stated that the plans, among other things, have helped them better identify future budget needs. Because the Core Operations Analysis is in an early stage of development, only a few park units have participated; officials at the participating park units we visited said that they are better able to determine where operational efficiencies might accrue. Park Service headquarters used the Scorecard to validate and approve increases in funding for daily operations for fiscal year 2005. |
Formularies are used to help control pharmacy costs, enhance patient safety, and improve quality of care by, among other things, limiting drug choices to those a health care organization has determined are the most medically appropriate and cost-effective for a given patient population. As early as 1955, VA medical centers began using formularies, and at the time, each medical center maintained its own formulary at the local level. In September 1995, VA created a centralized group to manage its pharmacy benefit nationwide, now called Pharmacy Benefits Management Services (PBM), and soon began the process of moving to a single, national formulary. As an interim step, VA established regional formularies operated by each Veterans Integrated Service Network (VISN). On June 1, 1997, VA implemented the national formulary to help standardize veterans’ access to care across the country, though the medical centers and VISNs continued to maintain their own local and regional formularies. In 2001, VA abolished medical center formularies, and in 2009, VA eliminated VISN formularies. Discussion of the elimination of the regional VISN formularies began in 2005. In 2006, VA reviewed all drugs on VISN formularies that were not on the national formulary to determine if these drugs should be considered for inclusion on the national formulary. According to officials from VA’s PBM, in cases where VA decided not to add a drug from a VISN formulary to the national formulary, VISN and medical center staff were given 6 months to appeal the decision. In the end, 91 drugs from VISN formularies were added to the national formulary. Around the same time that this review was taking place, VISNs were asked to stop adding new drugs to their formularies unless the drugs were also being added at the national level. While VISNs and medical centers no longer maintain their own formularies, VA’s decentralized approach to developing the Veterans Health Information Systems and Technology Architecture (VistA) means that each medical center is still responsible for maintaining a local drug file that matches VA’s national drug file.
In addition to maintaining a local drug file, each medical center decides whether and how it will customize other VistA applications onsite. VistA contains over a hundred separate computer applications, including the Computerized Patient Record System (CPRS). VA providers can use CPRS to review and update patient medical records and to place electronic orders for medications, procedures, and tests. As part of its responsibilities for managing VA’s pharmacy benefit at the national level, VA’s PBM updates the national formulary listing, maintains databases that track drug use, and reviews data on nonformulary drug requests that it requires each VISN to report quarterly. PBM clinicians are responsible for maintaining a clinical portfolio on drugs for certain diseases and are expected to continuously review information on new drugs that are relevant to their portfolio, as well as stay current on information on existing drugs. When appropriate, these clinicians will initiate a drug review. PBM works with the Medical Advisory Panel (MAP) and the VISN Pharmacist Executive (VPE) Committee to conduct reviews of drugs for its national formulary, including the review of drugs approved by the Food and Drug Administration (FDA) for use on the market. In addition to working with VA’s PBM on the national-level VPE Committee, each VPE works in conjunction with its VISN formulary committee at the regional level to provide oversight and guidance for national formulary management activities for the medical centers within the network. In turn, at the medical center level, each chief of pharmacy works with the local pharmacy and therapeutics (P&T) committee to implement national formulary decisions and ensure compliance with these decisions. (See fig. 1.) Although nearly all drugs that VA providers prescribe are on the national formulary, in some cases, providers determine that it is clinically necessary to prescribe nonformulary drugs. VA monitors the prescription of nonformulary drugs to ensure appropriate use and, accordingly, each VA medical center must have a nonformulary drug request process.
While VISNs and medical centers are responsible for implementing the nonformulary drug request process, VA has outlined certain requirements within its formulary management handbook. The handbook states that, at the local level, each VA medical center is responsible for establishing a process to adjudicate nonformulary drug requests that ensures decisions are evidence-based in accordance with certain prescribing criteria. VA also requires that medical centers adjudicate nonformulary drug requests within 96 hours. Each medical center chief of staff is responsible for establishing a system to address any provider-initiated appeals of denied nonformulary drug requests. At the regional level, VISNs are responsible for ensuring that medical centers have a nonformulary drug request process in place. Each VISN is also responsible for establishing a process to analyze nonformulary drug request data at the VISN and medical center levels to determine if the process is implemented appropriately and effectively in medical centers, and report these data to VA’s PBM on a quarterly basis. Reported information must include the numbers of nonformulary drug requests received, approved, and denied as well as the average time taken to adjudicate completed requests. In addition to requirements for the nonformulary drug request process, the handbook requires that VISNs ensure that local forums exist where formulary issues can be discussed with veterans service organizations (VSOs) on an ongoing basis. While VSO meetings may also be held at the regional and national level, there is no requirement that these meetings be convened specifically for the purpose of discussing formulary issues. We previously reported on the national formulary in December 1999 and in January 2001. We found that veterans had access to needed medications, but VA needed to improve its oversight activities.
In our 2001 report, we recommended that VA take steps to better ensure that VISNs and medical centers comply with the national formulary and nonformulary drug request policies and procedures. VA responded to these recommendations by, among other things, having its PBM check drug utilization data—which tracks drugs dispensed across VA—for outliers and requiring that nonformulary drug requests be adjudicated within 96 hours. VA uses a standardized process to review drugs for its national formulary that is coordinated at the national level by its PBM. PBM’s Chief Consultant told us that most reviews are initiated in response to FDA’s approval of drugs for use on the market. To begin reviewing a drug for inclusion on the national formulary, clinicians from PBM develop evidence-based drug monographs that include information on safety, efficacy, and cost, and seek comments on these monographs from VISN and medical center staff. Completed monographs are then reviewed by MAP and the VPE Committee, who vote on whether to add the drug to the national formulary. A majority of the drugs VA considered for addition to the national formulary in 2008 and 2009 were reviewed within one year of FDA approval, but there were various factors that caused some reviews to take longer. VA’s drug review process is coordinated at the national level by its PBM, whose Chief Consultant told us that most reviews are initiated following FDA’s approval of a drug for use on the market to determine whether to add the drug to the national formulary. While there are different types of FDA approvals, PBM’s Chief Consultant said that most drug reviews are triggered by FDA approval of a new drug. PBM also initiates drug reviews to consider whether to remove a drug from the national formulary, such as in response to the emergence of new safety issues. 
Additionally, PBM officials said that they may decide to conduct a drug class review to determine whether there is superiority of one or more drugs in a class, or if the drugs are equivalent in terms of safety and efficacy. Such reviews are undertaken when VA is considering negotiating a drug contract or to determine a drug’s place in therapy relative to other drugs in its class. According to VA officials, medical center staff can submit requests to their local P&T committees to review a drug for addition to or removal from the national formulary. P&T committees review and forward approved requests to their regional VISN formulary committees. If VISN formulary committees review and approve requests, they forward them to VA’s PBM for consideration at the national level. In 2009, VISNs submitted 13 requests for drug reviews, and while 2 of the requests were later withdrawn, PBM approved all of the requests for national review. VA uses a standardized process to review drugs for inclusion on its national formulary, which begins with a clinician from its PBM researching relevant literature to develop an evidence-based drug monograph. Each drug monograph includes the clinician’s research methodology, safety and efficacy tables, and data on cost. Further, the clinician may consult with VA subject matter experts to assist with the development of monographs when necessary. Once a draft of a monograph is ready, the clinician forwards it to the VPEs and requests that the document be disseminated to VISN and medical center staff, including physicians and pharmacists, for comment. Generally, within a period of 2 to 4 weeks, comments about the monograph are returned to the PBM clinician. The clinician compiles and reviews these comments, and incorporates any changes deemed appropriate to the monograph. 
Once VA’s PBM has completed a drug monograph, MAP and the VPE Committee review PBM’s findings and vote on whether to add the drug to the national formulary based on an assessment of the drug’s safety, efficacy, and cost as well as its relevance to the veteran population. While most members of MAP and the VPE Committee are VA staff, a clinical representative from DOD participates in the MAP and VPE Committee meetings and votes on MAP decisions. A number of MAP and VPE Committee members we interviewed told us that they consider a drug’s safety and efficacy before they consider cost when reviewing a monograph. Most members also said that the two groups typically agree on national formulary decisions, but that when disagreements occur, they usually stem from operational issues, such as establishing process guidelines for ordering a drug within VA. In the event of a disagreement, VA’s policy is that final decisions rest with MAP. MAP and the VPE Committee may also recommend that restrictions or criteria for use be developed to better ensure a drug’s appropriate use. Criteria for use are reviewed by MAP and the VPE Committee and then sent to VISN and medical center staff for comment. Once comments are received, members vote to approve the final document. Officials from VA’s PBM told us that MAP and the VPE Committee are also authorized to classify a drug as “no buy” for purposes of prohibiting its use in cases where there are serious safety concerns in a population similar to the VA population. However, as of April 2010, there were no drugs on the national “no-buy” list. In addition, PBM has developed national guidance to improve the safety of “off-label” prescribing, which occurs when providers prescribe drugs for indications other than those FDA has approved. While PBM authorizes its providers to prescribe drugs “off-label,” it recommends that providers use an evidence-based approach and follow protocols established by their local P&T committee.
Figure 2 illustrates VA’s drug review process. After MAP and the VPE Committee make national formulary decisions, VA’s PBM updates the national drug file. VISN formulary committees communicate national formulary decisions to medical center P&T committees. P&T committees subsequently inform medical center staff of these decisions. Pharmacy IT staff at each medical center update the local drug file by matching it to the drug’s code at the national level. PBM officials we interviewed stated that it is more difficult for some medical centers to update local drug files than others, generally due to IT staffing resources. In 2008 and 2009, VA considered 61 drugs for inclusion on the national formulary. Of those, MAP and the VPE Committee voted to add 11 drugs to the national formulary and to approve 50 drugs for nonformulary use. In addition, MAP and the VPE Committee voted to add either restrictions or criteria for use to 25 of these drugs to ensure they were used appropriately. According to officials from VA’s PBM, MAP and the VPE Committee made the 50 drugs nonformulary for reasons including (1) they determined the drug under review offered no significant benefit over national formulary alternatives already available, (2) they determined that the drug would have limited use for the veteran population, or (3) they had concerns about ensuring the drug’s safe and appropriate use, and therefore required prospective review. The time it takes VA to review a drug varies and is primarily determined by whether there are factors that complicate the drug review process, such as safety concerns, and the drug’s priority status. Of the 61 drugs that VA considered for addition to the national formulary during 2008 and 2009, information provided by VA’s PBM indicates that 35 reviews were concluded within 1 year of FDA approval and an additional 17 reviews were completed within 2 years of FDA approval.
The remaining 9 reviews were completed more than 2 years after the FDA approval, with 4 of these reviews taking 3 to 5 years to complete. PBM officials reported a number of reasons why some of these reviews took longer than others. In some cases, safety concerns necessitated the development of criteria for use, which delayed the drug review process. For example, one drug, approved by FDA for the treatment of a rare blood disease, was determined to potentially increase a patient’s risk of infection and took VA 18 months to review. Officials said that developing criteria for use increased review time because the criteria were complicated and required consultation with a hematologist. In other cases, reviews were delayed because there was a lack of reputable information, such as studies published in peer-reviewed journals. Additionally, drug reviews took longer when alternative drugs were already available on the national formulary and PBM decided to conduct a drug class review. PBM officials told us that drug class reviews can take twice as long as the review of an individual drug because they involve compiling information on multiple drugs. Officials from VA’s PBM told us they experience a backlog of drugs to review because there are always more potential reviews than they can accommodate, and thus, they review high-priority drugs first. Additionally, the officials said they have implemented strategies to alleviate the drug review workload, such as soliciting assistance from VISN and medical center staff to prepare and present drug monographs, and conducting abbreviated drug reviews when appropriate. Some VPEs we spoke with said that they can talk to PBM clinicians about moving a drug up on the priority list if necessary. In addition to the 61 drugs VA considered for addition to the national formulary in 2008 and 2009, we examined new drugs approved by FDA in 2008 and 2009, and the progress VA made in reviewing them. (See table 1.) 
According to information provided by VA’s PBM, of the 52 new drugs FDA approved for use on the market in 2008 and 2009, VA either reviewed, or was in the process of reviewing, 38 of them as of March 2010. Reviews of the remaining 14 drugs were pending, since VA categorized these drugs as a lower priority for reasons such as there being a viable alternative drug on the national formulary or the drug having limited use for the veteran population. Although drug review times vary, if providers determine that it is clinically necessary, veterans may be able to access a drug before a national review is complete. Officials from VA’s PBM told us that, due to the length of time it takes for PBM to conduct a drug review, VISNs and medical centers may develop interim guidance for reviewing and approving nonformulary requests for a drug not yet reviewed at the national level. The officials said that the VISN or medical center creating interim guidance could develop drug monographs and criteria for use for the purpose of evaluating nonformulary drug requests. PBM officials also said that they neither encouraged nor discouraged this practice, and think it is common among VISNs and medical centers. Officials from one medical center we interviewed told us that they conduct reviews of drugs that have not yet been reviewed at the national level, and that if they approve a drug for nonformulary use, they typically develop local restrictions or criteria for use until national guidance is issued. Officials from another medical center stated that they would not decide whether to permit nonformulary use of drugs that have not been reviewed by MAP and the VPE Committee, but noted that if a veteran urgently needed one of these medications, they would forward the request to the VISN. 
VISNs and medical centers vary in approaches to implementing the nonformulary drug request process, including how they adjudicate nonformulary drug requests, collect and report required data to VA’s PBM, and address appeals of denied requests. We found that IT enhancements could help facilitate more consistent implementation of the process. Although VA intends to replace its pharmacy IT system, it is uncertain whether changes that would support the nonformulary drug request process will be implemented. The process for adjudicating nonformulary drug requests varies among medical centers, in part due to differences in local IT resources. Most medical centers use CPRS to electronically process nonformulary drug requests, though providers can also make requests outside of the system either through submitting paper-based requests or contacting adjudicating officials directly to verbally request nonformulary drugs. Further, the extent to which medical centers can automate CPRS depends on the availability of onsite IT expertise. Some medical centers, for example, are able to create drug-specific order templates in CPRS for nonformulary drugs. Officials from VA’s PBM told us that these templates are interactive and prompt providers through criteria checks to ensure appropriate use. If criteria are met, the drug is automatically submitted for ordering. Although this method further automates the nonformulary drug request process and better ensures that information about the drug is easily accessible to providers, some VPEs told us that it can be challenging from an IT perspective and that not all medical centers have the IT resources needed to create order templates. One VPE told us that the VISN has created order templates so that medical centers with more limited IT resources can use them, and another VPE said that the VISN would like to do this. 
While some medical centers are able to create drug-specific order templates, most VPEs and medical center officials whom we interviewed told us that CPRS is used to create electronic nonformulary drug request forms, which providers submit to a pharmacist for adjudication. The format of these request forms can vary. For example, some may be used just for nonformulary drug requests, while others may be used more broadly to request both national formulary drugs that have restrictions and nonformulary drugs. Additionally, some nonformulary drug request forms may be populated with drug-specific information, while others require providers to fill in information for requested drugs. Officials from two of the four medical centers whom we interviewed cited challenges with using nonformulary drug request forms. For example, officials from one medical center told us that due to the way the nonformulary drug request form is designed, providers may not realize how to access information needed to justify their requests and subsequently have them denied. VISNs vary in the processes they use to collect required nonformulary drug request data and report these data at the VISN and medical center levels to VA’s PBM on a quarterly basis. VPEs from 18 of the 21 VISNs told us they collect required data—which include the numbers of nonformulary drug requests received, approved, and denied, as well as the average time taken to adjudicate completed requests—from their medical centers and report them to PBM, while 3 VPEs said that they instruct medical centers to report nonformulary drug request data directly to PBM. VPEs from the 18 VISNs obtain nonformulary drug request data in a variety of ways, such as extracting data from shared databases, or requiring medical center staff to complete spreadsheets or input data into an internal VISN Web site.
Medical centers have established different processes for addressing provider-initiated appeals of denied nonformulary drug requests, and one VISN has centralized the appeals process. The VPEs we interviewed stated that medical centers rely on different personnel to adjudicate appeals of denied nonformulary drug requests, such as the chief of staff, the chief of pharmacy, the P&T committee chair, or the entire P&T committee. Furthermore, the appeals process may involve several layers of review. For example, officials from one medical center explained that appeals are first routed to a pharmacy supervisor. If the pharmacy supervisor also denies the nonformulary drug request, it is forwarded to the chief of medicine for review. If the chief of medicine denies the request, the provider can make a final appeal to the chief of staff. Some VPEs also said that VISN chief medical officers and formulary committees may become involved in adjudicating appeals at the regional level. Based on interviews with VA officials, we found that IT improvements could facilitate more consistent implementation of the nonformulary drug request process among VISNs and medical centers, and some of these capabilities were included in the original scope of VA’s Pharmacy Reengineering (PRE) project. Since 2001, VA has been working on PRE with the intention of improving pharmacy operations, customer service, and patient safety by replacing current pharmacy software with new technology. At the national level, VA’s Office of Information and Technology (OI&T) is responsible for planning, executing, and providing oversight for PRE—which includes allocating resources to the project—while its PBM is responsible for developing and prioritizing PRE requirements. PBM officials told us that PRE was expected to make adjudicating nonformulary drug requests, as well as collecting and reporting related data, easier and more standardized systemwide.
For example, PBM’s Chief Consultant said that with enhanced software, providers at all VA medical centers would be prompted to complete a series of criteria checks when requesting a nonformulary drug, and if met, the request would be automatically approved. PBM officials also stated that PRE would help improve data collection and reporting related to nonformulary drug requests if it is implemented as intended. VPEs from 15 of the 21 VISNs we spoke with stated that improvements could be made to VA’s pharmacy IT system, and most cited various benefits that improvements could provide, such as better ensuring that prescribing criteria are adhered to and enhancing the ability to collect and report nonformulary drug request data. However, VA has recently restructured the PRE project and has not established plans for delivering all originally proposed capabilities. In July 2009, the department suspended IT projects— including PRE—that had either fallen behind schedule or gone over budget. Subsequently, the department instituted a new IT project management approach that, among other things, requires projects to plan and deliver releases of new IT functions in increments of up to 6 months. In October 2009, VA restarted PRE with plans for an initial set of four increments and has since identified two additional increments, for a total of six increments. According to officials from VA’s OI&T and PBM, the six increments reflect an effort to meet the department’s highest priority pharmacy reengineering needs while delivering new IT functions more frequently. However, capabilities that directly support the nonformulary drug request process and related data collection and reporting are not included in these increments, and as of May 2010, future increments had not been planned. Furthermore, VA’s development and implementation of future increments could be impacted by delays the project is experiencing with the first six increments. 
Specifically, while increment four was scheduled to be implemented by June 2010, in August 2010 officials said that they intended to implement this increment by the end of the month. Officials also told us that increments five and six may not meet their estimated implementation date of December 2010. As a result, the extent to which PRE will help standardize the nonformulary drug request process, as the project was originally envisioned, is uncertain. Per VA policy, nonformulary drug requests must be adjudicated within 96 hours; however, VA is unable to determine the total number of adjudications that exceed this standard due to limitations in the way data are collected, reported, and analyzed. While the total number of nonformulary drug request adjudications that exceed 96 hours is unknown, we found that data reported to VA’s PBM on quarterly average adjudication times for medical centers are sufficient to demonstrate that not all requests are adjudicated within this time frame. Additionally, PBM has limited oversight of the timeliness of appeals of denied nonformulary drug requests. VA policy requires that nonformulary drug requests be adjudicated within 96 hours, but it is unable to determine the total number of adjudications that exceed this standard due to limitations in the way data are collected, reported, and analyzed. As previously noted, VISNs are required to report nonformulary drug request average adjudication times at the VISN and medical center levels to VA’s PBM on a quarterly basis. VA’s decision to limit data collection and analysis of the timeliness of nonformulary drug request adjudications to average adjudication times has oversight implications compared to collecting and analyzing data on individual requests. First, without collecting and analyzing request-level data, the total number of adjudications that exceed 96 hours is unknown systemwide. 
Second, averages can be strongly influenced by the presence of a few extreme values, or outliers, and may not give an accurate view of the typical adjudication times at medical centers. Additionally, inconsistencies in the way nonformulary drug request data are collected and reported across VA mean that data reported for some VISNs and their medical centers may not be entirely accurate or complete. VPEs for 8 of the 21 VISNs told us that medical centers in their regions may include requests for restricted national formulary drugs in the nonformulary drug request data that they report to VA’s PBM. If quarterly average adjudication times were to exceed 96 hours at medical centers within these VISNs, it would not be possible to determine whether this was the result of requests for restricted national formulary drugs or requests for nonformulary drugs. PBM officials said that they were not aware of this practice and would remind VPEs that only requests for nonformulary drugs are to be reported. Also, VISNs and medical centers determine how they will collect and report data on nonformulary drug requests made through paper-based forms and direct verbal communications with adjudicators, and some medical centers may not include these types of requests in reported data. Specifically, officials whom we interviewed from one medical center told us that only requests for 26 nonformulary drugs are made through CPRS and reported to PBM, while other nonformulary drug requests are made through direct communications with adjudicators to manage workloads. The VPE for this medical center said that steps are being taken to ensure that it includes all nonformulary drug requests in the data it reports. Finally, six medical centers did not report nonformulary drug request data for every quarter in 2009. PBM officials told us they were not aware of this issue and would ensure that VPEs check that all medical centers report data.
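The limitation of relying on quarterly averages can be illustrated with a short sketch. The adjudication times below are hypothetical, invented only to show how an average well under 96 hours can coexist with an individual request that far exceeds the standard:

```python
# Hypothetical adjudication times (in hours) for one medical center's quarter.
times = [24, 36, 48, 12, 240, 18, 30]

# The quarterly average is what medical centers report to PBM.
avg = sum(times) / len(times)

# The request-level view is not collected or analyzed systemwide.
late = [t for t in times if t > 96]

print(round(avg, 1))  # 58.3 -> the reported average is well under 96 hours
print(late)           # [240] -> yet one request took 240 hours to adjudicate
```

Because only the average is reported, the 240-hour request in this hypothetical quarter would be invisible in the data PBM receives.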
While VA is unable to determine the total number of nonformulary drug request adjudications that exceed 96 hours, we found that data reported to VA’s PBM on quarterly average adjudication times for medical centers are sufficient to demonstrate that not all requests are adjudicated within this time frame. To conduct our review of the data reported to PBM, we limited our analysis to the 13 VISNs and their medical centers where VPEs told us that requests for restricted national formulary drugs are not included in reported nonformulary drug request data. Therefore, even though reported data for these VISNs and medical centers may be incomplete due to, for example, missing paper-based and verbal requests, the data are sufficiently reliable to show that at least some nonformulary drug requests are not being adjudicated within VA’s 96-hour standard. Specifically, we found that during 2009, 7 of these VISNs each had one or more medical centers that took longer than 96 hours—on average—to adjudicate nonformulary drug requests in a given quarter. Quarterly average adjudication times that exceeded 96 hours within the 7 VISNs ranged from just over 97 hours at one medical center to 240 hours at another medical center. Officials from VA’s PBM told us that they analyze nonformulary drug request data aggregated at the VISN level to monitor the timeliness of adjudications; however, this approach may not alert them to adjudication problems occurring at medical centers. PBM officials stated that VISNs and medical centers are primarily responsible for ensuring compliance with nonformulary drug request policies; thus, while medical center-level nonformulary drug request data are collected and reported to PBM, it analyzes data aggregated at the VISN level to ensure timely adjudications and expects VISNs and medical centers to monitor medical center-level data. 
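The masking effect of analyzing data aggregated at the VISN level can be sketched the same way. The center-level quarterly averages below are hypothetical, and the unweighted mean is only an assumption about how aggregation might be performed:

```python
# Hypothetical quarterly average adjudication times (hours) by medical center.
center_avgs = {"center_a": 240, "center_b": 20, "center_c": 24,
               "center_d": 28, "center_e": 18, "center_f": 22, "center_g": 26}

# VISN-level figure (assumed here to be an unweighted mean across centers).
visn_avg = sum(center_avgs.values()) / len(center_avgs)

# Centers whose own average exceeds the 96-hour standard.
over = [c for c, t in center_avgs.items() if t > 96]

print(round(visn_avg, 1))  # 54.0 -> the VISN-level average meets the standard
print(over)                # ['center_a'] -> even though one center averaged 240 hours
```

Monitoring only the VISN-level figure would not flag the hypothetical center_a, which is consistent with the pattern the reported data show.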
PBM officials told us that they would follow up with VISNs if aggregated data showed an average adjudication time that was greater than 96 hours. However, while at least 7 VISNs had medical centers with reported quarterly average adjudication times over 96 hours in 2009, none of the VISN-level averages exceeded VA’s standard.

At the national level, VA’s PBM does not have the framework in place to ensure that appeals of denied nonformulary drug requests are resolved in a timely fashion. While PBM officials told us that they expect nonformulary drug request appeals to be adjudicated in a timely manner, they have not established a time frame in policy. Most VPEs also told us that they expect appeals to be adjudicated in a timely manner, and some stated that the 96-hour nonformulary drug request adjudication threshold also applies to appeals. However, we found that not all appeals processes may be structured to produce timely results. For example, officials from one medical center told us that the local P&T committee adjudicates appeals for nonformulary drugs that are not urgently needed. However, one official noted that appeals that go to the P&T committee can take a month or more to resolve as they are dependent on the P&T committee’s meeting schedule. Other VPEs also stated that the medical centers in their regions may require P&T committees to adjudicate appeals. Furthermore, VA’s PBM does not require VISNs and medical centers to collect and analyze data on the nonformulary drug request appeals process; therefore, the number of appeals, outcomes, and adjudication times are unknown systemwide. Of the 21 VISNs, only one VPE reported that the VISN collects and analyzes data on nonformulary drug request appeals. Some VPEs told us that medical centers may track nonformulary drug request appeals data.
However, officials from three of the medical centers whom we interviewed told us that their sites do not collect such data, with an official from one noting that this is because the medical center has yet to receive any appeals. An official from the fourth medical center whom we interviewed said that appeals are published in P&T committee meeting notes, but that the medical center does not aggregate these data.

VA obtains beneficiary input on the national formulary mainly through VSO meetings and complaints, though some VISNs have taken additional steps to seek this input. Officials from VA’s PBM told us that they make the drug review process transparent to veterans through online information about the national formulary, and some VPEs and medical center officials described undertaking other activities to educate beneficiaries. At the national level, VA officials are considering options for increasing beneficiary input on the national formulary and improving the transparency of the drug review process, and most VPEs and medical center officials told us there could be benefit to doing so. VA officials told us that they may obtain beneficiary input on the national formulary through VSO meetings, although the extent to which pharmacy staff attend these meetings varies among VISNs and medical centers, and officials said that national formulary issues are not frequently discussed. All VPEs told us that medical centers in their regions hold local VSO meetings, and many said that there are VSO meetings held at the regional level as well. However, 3 of the 21 VPEs said that they or a VISN-level pharmacy representative regularly attend VSO meetings either at the regional or local level. Eleven VPEs said that they sometimes attend these meetings, and 7 said that they do not attend, but noted that pharmacy staff at medical centers may attend the meetings.
Of the medical center chiefs of pharmacy we interviewed, one attends VSO meetings at the medical center regularly, two attend if invited, and one does not currently attend these meetings. While officials from most VISNs and medical centers whom we interviewed told us that pharmacy benefits are discussed during VSO meetings, many also said that issues related to the national formulary are not often raised. Rather, they stated that pharmacy benefit concerns tend to focus on operational issues, such as copayments and ordering medication refills. A few VPEs noted that when questions about the national formulary are raised during VSO meetings, they are usually patient-specific and addressed outside of the meetings. At the national level, the Chief Consultant from VA’s PBM said that, as necessary, he discusses national formulary issues at VSO meetings held by VA’s Under Secretary for Health on a quarterly basis; however, the Chief Consultant has only attended one of these meetings in the past 6 years. He said that he also receives occasional questions from VSO representatives about the national formulary. Outside of VSO meetings, most VPEs and medical center officials said that VA obtains beneficiary input on the national formulary through complaints by veterans or those acting on their behalf, such as providers, patient advocates, or members of Congress. Almost all VPEs noted that, at the regional level, they do not receive many complaints related to the national formulary and that most complaints are handled locally. Officials from three of the four medical centers we spoke with discussed receiving complaints on the national formulary. For example, an official from one medical center said that the medical center receives complaints from patients who transferred to VA from the private sector and want to stay on a medication that is not on VA’s national formulary. 
Officials from VA’s PBM told us that, at the national level, they occasionally receive complaints about the national formulary, but they do not routinely monitor beneficiary input in a centralized way. For example, while patient advocates are required to collect data on veteran complaints at medical centers, PBM officials reported that they do not have information on these or other local complaints. PBM officials also told us that while VA’s Office of Quality and Performance administers the Survey of Healthcare Experiences of Patients, this survey is limited in scope and they do not use it to obtain beneficiary input on the national formulary. PBM officials reported that they are not aware of other surveys conducted for this purpose. Some VISNs have taken additional steps to seek beneficiary input on the national formulary. For example, one VPE whom we interviewed conducts site visits at medical centers in the region and talks to beneficiaries about national formulary issues during these visits. Another VPE said that the VISN recently added a 2-hour session at the end of its Executive Leadership Council meetings for beneficiaries to attend and discuss concerns. The VPE said that so far pharmacy benefit concerns have been raised at every meeting, including concerns about access to national formulary and nonformulary drugs. A VPE from a third VISN said that the region tried adding comment cards for pharmacy suggestions, but that they did not receive many suggestions. Officials from VA’s PBM told us that they make the drug review process transparent to beneficiaries through national formulary information that is available online. Our review of PBM’s Web site found that PBM posts the national formulary listing via an Excel spreadsheet, with a separate spreadsheet that highlights formulary changes. In addition, PBM provides a link to its Ez-Minutes newsletter, which is accessible online or through an e-mail subscription. 
Ez-Minutes provides a listing of national formulary decisions, but does not provide context for these decisions, such as when a drug is made nonformulary due to safety concerns. PBM also posts documents related to the drug review process on its Web site, such as drug monographs and criteria for use documents. Finally, the Web site provides answers to frequently asked questions about the national formulary. In addition to the information provided by VA’s PBM, some VPEs and medical center officials described undertaking other activities to educate beneficiaries on VA’s drug review process at the regional and local levels. For example, one VPE whom we interviewed said that the VISN had begun a new program called “Formulary Awareness: Veterans Helping Veterans.” The VPE told us that this program has a number of components, including recruiting individuals to be in waiting rooms and wear buttons that say “Ask! Is your medication on formulary?” and providing brochures, pens, and tent cards at medical centers with information that includes a national formulary fact of the month. Another VPE said that the VISN sends a newsletter to veterans in its region that includes a section on how the national formulary works, points veterans to PBM’s Web site, and provides pharmacist contact information if veterans have any questions. Likewise, one medical center official whom we interviewed posts explanations about why VA has a national formulary on the bulletin boards in pharmacy waiting room areas. Also, officials from three of the medical centers whom we interviewed noted that they send letters to beneficiaries when national formulary changes affect them, and officials from the fourth medical center said that they ask providers to inform veterans of these changes. At the national level, VA officials are considering options for increasing beneficiary input on the national formulary and improving the transparency of the drug review process.
Options were discussed during a MAP meeting in January 2010, and while no formal decision was made, the overall consensus was to try to work within existing lines of communication. Following this meeting, officials from VA’s PBM told us that MAP and the VPE Committee were in discussions to develop a process whereby veteran input at local VSO meetings could be reported and addressed nationally. The officials said that they would like to use local VSO meetings as a mechanism for obtaining input, because it would be easier for veterans to travel to meetings in their local area and these meetings may allow for input on not only national formulary issues, but also other pharmacy benefit issues that may be local in scope. In June 2010, options were again discussed during a meeting of MAP and the VPE Committee. PBM’s chief consultant told us that a final decision was not made during this meeting, but that the next step is to discuss the issue with VHA management. Officials from most VISNs and medical centers we interviewed told us that there could be benefit to increasing beneficiary input on the national formulary or improving the transparency of VA’s drug review process, and a number gave suggestions for doing so. For example, one medical center official said that the Ez-Minutes newsletter contains technical language and that it would be beneficial for VA’s PBM to create something that was easier for beneficiaries and their representatives to understand. Likewise, a VPE suggested that national formulary changes be sent to local VSOs along with non-technical explanations of the reasons for the changes. Another VPE said that one way to better obtain beneficiary input on the national formulary would be to survey patients through an independent organization. 
While VA officials are considering options for increasing beneficiary input on the national formulary and improving the transparency of the drug review process, they have concerns about formally involving beneficiaries or their representatives in national formulary decisions. Specifically, this matter was raised during the January MAP meeting during a discussion of DOD’s Uniform Formulary Beneficiary Advisory Panel. During our interviews, VPEs and medical center officials also raised concerns about this issue. Their concerns included that laypeople may not have the technical knowledge to make evidence-based decisions, and that they could be unduly influenced by direct-to-consumer advertising from pharmaceutical companies. Officials were also concerned that another layer of review would slow down the drug review process. We spoke with DOD officials about the Uniform Formulary Beneficiary Advisory Panel, and they said that although the panel’s input on DOD’s formulary decisions is limited, it has provided useful feedback on how to operationalize formulary decisions and has resulted in DOD communicating formulary decisions in less technical terms to beneficiaries. In 2009, VA provided millions of prescriptions to veterans through its pharmacy benefit. While VA’s process for reviewing drugs to decide whether they should be included on its national formulary is overseen by its PBM, VISNs and medical centers are responsible for implementing the nonformulary drug request process, and there is variation in the approaches that VISNs and medical centers take. For example, some VISNs and medical centers have more automated approaches to adjudicating nonformulary drug requests and collecting and reporting required data than others. In response to recommendations we made in our 2001 report, VA established a requirement for routine nonformulary drug requests to be adjudicated within 96 hours.
However, some adjudications continue to surpass this threshold, and data reported to monitor timeliness are not always accurate or complete for all VISNs and their medical centers. Additionally, reported data are only required to include average adjudication times for nonformulary drug requests, which do not capture the total number of adjudications that fall outside VA’s 96-hour standard. Finally, VA does not require that appeals of denied nonformulary drug requests be resolved within a certain time frame or that the outcomes of appeals be tracked. Given these limitations, additional steps are needed to ensure that veterans receive clinically necessary nonformulary drugs in a timely manner. VA is in the process of making changes to its pharmacy IT system through its PRE project, which could help facilitate more consistent implementation of the nonformulary drug request process among VISNs and medical centers. We previously reported on delays and challenges VA has faced implementing PRE, and it remains unclear when PRE will be complete. If PRE does not move forward, VA will continue to rely on its current IT system to manage its pharmacy benefit and depend on locally developed IT solutions to adjudicate nonformulary drug requests and collect data on outcomes. To provide assurance that requests for nonformulary drugs are adjudicated in a timely fashion, we recommend that the Secretary of Veterans Affairs take three actions. Specifically, the Secretary should direct the Under Secretary for Health to establish mechanisms to ensure that: reported nonformulary drug request data are accurate and complete; reported nonformulary drug request data are collected at the request level and analyzed by VA’s PBM, VISNs, and medical centers at this level; and appeals of denied nonformulary drug requests are tracked.
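The reporting limitation described above, that averages alone cannot reveal how many adjudications exceed the 96-hour standard, can be illustrated with a small numeric sketch. The figures below are hypothetical, not VA data:

```python
# Hypothetical adjudication times in hours for two notional medical centers.
# Both report the same quarterly average, yet they differ sharply in how
# many requests exceeded the 96-hour adjudication standard.
center_a = [90, 92, 94, 96, 88]    # no request over 96 hours
center_b = [10, 12, 200, 180, 58]  # two requests well over 96 hours

STANDARD_HOURS = 96

for name, times in [("A", center_a), ("B", center_b)]:
    average = sum(times) / len(times)
    over = sum(1 for t in times if t > STANDARD_HOURS)
    print(f"Center {name}: average {average:.0f} hours, "
          f"{over} request(s) over {STANDARD_HOURS} hours")
```

Both centers average 92 hours, so average-only reporting would make them look identical; only request-level data reveal that the second center missed the standard twice.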
Additionally, we recommend that the Secretary of Veterans Affairs direct the Chief Information Officer to clarify plans regarding when functionality related to the nonformulary drug request process will be implemented under PRE. In commenting on a draft of this report, VA stated that it generally agreed with our conclusions and concurred with our recommendations. VA’s comments are reprinted in appendix I. Specifically, with regard to our first recommendation to establish mechanisms to ensure that requests for nonformulary drugs are adjudicated in a timely fashion, VA set a target date of October 30, 2010 for developing these mechanisms and plans to implement them during the first quarter of fiscal year 2011. With regard to our recommendation to clarify plans for when functionality related to the nonformulary drug request process will be implemented under PRE, VA acknowledged the importance of improving the nonformulary drug request process through PRE, but stated that addressing patient safety issues in VA’s current pharmacy software takes precedence. VA reported that the department intends to complete field testing of currently approved PRE increments related to patient safety by November 1, 2010. VA further stated that it can then begin an analysis, which could be completed within 90 days, to determine how improvements to the nonformulary drug request process will be addressed in future PRE increments. We appreciate VA’s focus on patient safety within PRE, but reiterate the importance of VA clarifying its plans for the remainder of the project. VA also provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Veterans Affairs and appropriate congressional committees. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-7114 or at dickenj@gao.gov. 
Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix II. In addition to the contact named above, Jennifer Grover, Assistant Director; Mark Bird; Leonard Brown; Martha Kelly; Drew Long; Denise McCabe; Lisa Motley; Jessica Smith; Rachel Svoboda; Eric Trout; and Merry Woo made key contributions to this report.

In 2009, the Department of Veterans Affairs (VA) spent nearly $4 billion on prescriptions for veterans. In general, VA provides drugs on its national formulary. However, all VA medical centers must have a nonformulary drug request process that is overseen by their regional Veterans Integrated Service Network (VISN). This report responds to a House Committee on Appropriations report directing GAO to review VA's formulary process and to an additional congressional request. Specifically, GAO reviewed (1) the process VA uses to review drugs for its national formulary, (2) the approaches VISNs and medical centers take to implementing the nonformulary drug request process, (3) the extent to which VA ensures the timely adjudication of nonformulary drug requests, and (4) the mechanisms VA has in place to obtain beneficiary input on the national formulary and make the drug review process transparent. GAO reviewed VA policy guidance and VA's pharmacy-related information technology (IT) initiatives, analyzed 2008 and 2009 drug review data and 2009 nonformulary drug request data, and interviewed VA officials from the national level, each VISN, and a judgmental sample of four medical centers. VA uses a standardized process to review drugs for its national formulary that is coordinated at the national level by its Pharmacy Benefits Management Services (PBM). The Chief Consultant from VA's PBM told GAO that most drug reviews are initiated in response to FDA's approval of drugs for use on the market.
To begin the process of deciding whether to include a drug on the national formulary, PBM develops evidence-based drug monographs that include information on safety, efficacy, and cost. PBM seeks comments on these monographs from VISN and medical center staff and, when appropriate, subject-matter experts. Once a monograph is complete, PBM sends it to its Medical Advisory Panel and the VISN Pharmacist Executive Committee, which review the monograph and vote on whether to add the drug to the national formulary. According to information provided by PBM, reviews for a majority of the drugs VA considered for addition to the national formulary in 2008 and 2009 were completed within a year of FDA approval, but there were a number of factors, such as safety concerns, that caused some to take longer. VISNs and medical centers vary in how they implement the nonformulary drug request process, including how they adjudicate nonformulary drug requests, collect and report required data to VA's PBM, and address appeals of denied requests. GAO found that IT enhancements could help facilitate more consistent implementation of the process. Although VA is working on replacing its pharmacy IT system, officials could not tell GAO whether components that would support the nonformulary drug request process will be implemented. VA requires that nonformulary drug requests be adjudicated within 96 hours, but it is unable to determine the total number of adjudications that exceed this standard due to limitations in the way data are collected, reported, and analyzed. While the total number of nonformulary drug request adjudications that exceed 96 hours is unknown, GAO found that data reported to VA's PBM on quarterly average adjudication times for medical centers are sufficient to demonstrate that not all requests are adjudicated within this time frame. 
Additionally, PBM does not have the framework in place to ensure that appeals of denied nonformulary drug requests are resolved in a timely fashion. VA obtains input from beneficiaries on the national formulary mainly through Veterans Service Organization meetings and complaints, though some VISNs have taken additional steps to seek this input. Officials from VA's PBM told GAO that they make the drug review process transparent to veterans through national formulary information available on PBM's Web site, and some VISN and medical center officials described undertaking other activities to educate beneficiaries. At the national level, VA officials are considering options for increasing beneficiary input on the national formulary and improving the transparency of the drug review process, and most VISN and medical center officials told GAO that there could be benefit to doing so. GAO recommends that VA establish additional mechanisms to ensure nonformulary drug requests are adjudicated in a timely fashion. VA concurred with this recommendation.
FDA receives annual appropriations to carry out its medical product responsibilities; these appropriations include amounts derived from user fees paid by industry in connection with FDA activities. FDA’s medical product responsibilities include oversight of the safety and effectiveness of medical products marketed for sale in the United States, regardless of whether they are manufactured domestically or overseas. The agency’s role is far-reaching, and its responsibilities include oversight of medical products both before and after they are marketed in the United States. Each year, the request for FDA’s resources is submitted to Congress as part of the President’s Budget request. FDA develops and submits supporting information for the request in the budget justification that is submitted to the subcommittees with jurisdiction over FDA funding as part of the annual appropriations process. This information reflects how FDA proposes to meet its mission, goals, and objectives and assists Congress in understanding whether FDA will require significant changes in levels of appropriations. Guidance issued by OMB, which assists the President in overseeing the preparation of the federal budget, directs agencies to incorporate the cost of fulfilling all statutory requirements and responsibilities in their submissions to OMB for consideration in developing the President’s Budget request. We have also issued guidance on the development of comprehensive and reliable resource estimates, which includes recognition of the basic elements of such estimates. For example, these elements include complete and reliable data, such as data on the agency’s current resources, workload, and performance; provisions for program uncertainties; adjustment for inflation; recognition of any exclusions; and an independent review of the estimates. In fiscal year 2008, FDA’s funding totaled $2.2 billion.
Of this amount, about $500 million was derived from user fees collected from industry and made available until expended. The remaining amounts, about $1.7 billion, were derived from the General Fund of the Treasury and available during fiscal year 2008. Both user fee funding and fiscal year appropriations are made available through the annual appropriations process. About $1.2 billion—over half of FDA’s total funding—supported its medical product programs, including about $750 million in fiscal year appropriations and about $440 million in user fee funding. Over half of this funding—$681 million—supported the drug program, while $234 million supported the biologics program and $275 million supported the devices program. FDA’s total funding is a small portion of federal government and HHS funding. In fiscal year 2008, the federal government’s funding totaled approximately $3 trillion, of which about $722 billion was made available to fund HHS activities, including those at FDA. These amounts reflect both discretionary spending and mandatory spending. (See fig. 1.) All of FDA’s programs involve discretionary spending. User fees are paid in connection with FDA’s drugs, biologics, and devices programs’ review of applications for new medical products and inspections of mammography facilities. The Prescription Drug User Fee Act of 1992 (PDUFA) was enacted to expedite the review of applications for new drugs and new biologics. PDUFA authorized FDA to collect user fees from drug and biologic sponsors, typically manufacturers, to support the process of reviewing new drug applications (NDA) and biologics license applications (BLA). Likewise, the Medical Device User Fee and Modernization Act of 2002 (MDUFMA) authorized FDA to collect user fees from device sponsors to support the process of reviewing applications for certain new devices. In both cases, FDA’s authority to collect fees and use the amounts collected must be provided in appropriations acts. 
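The fiscal year 2008 funding breakdown described above can be cross-checked with simple arithmetic. A sketch, using the rounded amounts reported in the text (millions of dollars):

```python
# FDA fiscal year 2008 funding as reported (millions of dollars, rounded).
user_fees = 500            # derived from user fees, available until expended
appropriations = 1_700     # derived from the General Fund of the Treasury
total_fda = user_fees + appropriations          # ~$2.2 billion

# Medical product programs: ~$750M appropriations + ~$440M user fees.
medical_products = 750 + 440                    # ~$1.2 billion

# By program; the components sum to roughly the $1.2 billion total.
by_program = {"drugs": 681, "biologics": 234, "devices": 275}
program_total = sum(by_program.values())

print(total_fda, medical_products, program_total)
print(by_program["drugs"] / program_total)      # drugs: over half of the total
```

The drug program's $681 million is indeed over half of the roughly $1.2 billion in medical product funding, consistent with the text.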
Both PDUFA and MDUFMA require FDA to apply all user fee funding to support the agency’s process for reviewing applications for certain new medical products, and preclude the agency from using this funding for other agency activities. In fiscal year 2008, agency activities not funded with user fees included, for example, the agency’s oversight of the safety of human tissues, review of applications for generic drugs, inspections unrelated to the agency’s review of new medical products, and some postmarket safety oversight activities. PDUFA and MDUFMA user fee funding only partially covers FDA’s costs for reviewing applications for certain new medical products and associated activities. FDA is also required to use a specified amount of its fiscal year appropriations to support its review of these applications. Within FDA, three centers have primary responsibility for ensuring the safety and effectiveness of medical products. The Center for Biologics Evaluation and Research (CBER) is responsible for overseeing biologics; the Center for Drug Evaluation and Research (CDER) is responsible for overseeing drugs and some therapeutic biologics; and the Center for Devices and Radiological Health (CDRH) is responsible for overseeing devices and for ensuring that radiation-emitting products, such as microwaves and x-ray machines, meet radiation safety standards. Among other things, these centers evaluate the safety and effectiveness of new medical products prior to marketing, monitor the safety and effectiveness of marketed products, oversee the advertising and promotion of marketed products, formulate regulations and guidance, conduct research, communicate information to industry and the public, and set their respective medical product program’s priorities. In addition to the work of the three centers, the Office of Regulatory Affairs (ORA) conducts field activities for all of FDA’s product centers. 
Field activities include conducting inspections of domestic and foreign establishments involved in manufacturing medical products, examining medical products offered for import, and collecting and analyzing samples. Medical product program resources include funding for center activities and field activities. Center activity funding represents funding for the three centers—CDER, CBER, or CDRH—and field activity funding represents ORA funding for all medical product programs. As part of its oversight responsibilities, FDA reviews applications submitted by manufacturers for medical products they wish to market in the United States to ensure that new products are safe and effective, inspects establishments producing medical products to ensure manufacturing processes meet quality standards, reviews reports of adverse events to monitor the safety of marketed medical products, and examines advertising and other promotional materials to ensure they are not false and misleading. FDA’s oversight of medical product safety and effectiveness typically begins when medical product sponsors develop a new product, long before such products are marketed for sale. For example, FDA requires sponsors to submit an investigational new drug (IND) application before beginning clinical trials (studies in humans) of a new drug or new biologic. The IND application provides FDA with extensive information about the product, including safety and manufacturing information about the product, and outlines the sponsor’s plans for clinical trials. FDA assesses this preliminary information to ensure that the product is reasonably safe to begin studying in humans. While FDA does not issue a formal approval to the sponsor regarding an IND application, it can prohibit the start of a clinical trial by placing it on hold if, for example, the agency determines that human volunteers would be exposed to an unreasonable and significant risk of illness or injury. 
Sponsors often request guidance and feedback from FDA during the process of drug and biologic development. Before and during clinical trials, FDA may meet with sponsors to provide guidance on the design of the clinical trial. In addition, FDA may issue a written evaluation of particular aspects of a clinical trial—known as a special protocol assessment. FDA may also meet with sponsors after the completion of a successful clinical trial to discuss the information the agency would expect to see submitted to the agency for marketing approval. FDA’s approval is required before new drugs and biologics can be marketed for sale in the United States. To obtain FDA’s approval, sponsors must submit an application containing data on the safety and effectiveness of their new medical product as determined through clinical trials and other research. For example, sponsors must request approval for a new drug or new biologic by submitting an NDA or BLA. FDA reviews data included in these applications to determine whether the product is safe and effective for its intended use. FDA also examines proposed product labeling to ensure that it clearly states the condition and population the product is intended to treat. After completing its assessment of the information in the application and any subsequent submissions of additional information, known as application resubmissions, FDA determines whether to approve the product for marketing. After FDA approves a product, manufacturers requesting changes to product labeling, manufacturing, dosing, or usage must submit an application supplement to obtain FDA approval. In addition to its responsibility for approving new drugs prior to marketing, FDA approval is also required before generic drugs—drugs that are copies of already approved new drugs—can be marketed for sale in the United States. Sponsors of generic drugs may obtain FDA approval by submitting an abbreviated new drug application (ANDA) to the agency for review. 
The ANDA contains data showing, among other things, that the generic drug is bioequivalent to, or performs in the same manner as, a drug approved through the NDA process. Similar to its review of NDAs, FDA reviews information submitted in the application, including proposed product labeling. To request FDA approval of proposed changes to product labeling, manufacturing, dosing, or usage after a generic drug is approved, sponsors must submit an ANDA supplement. FDA is also responsible for overseeing the safety and effectiveness of devices. Devices are classified into one of three classes—class I, II, or III—based on the level of risk posed to the patient or user and the controls necessary to reasonably ensure safety and effectiveness. Class I devices are those that pose the lowest risk, and class III devices are those that pose the highest risk. Some devices are subject to one of two types of FDA review before they may be marketed for sale in the United States. Some class II devices are required to obtain FDA clearance through a premarket notification process, whereby a sponsor must demonstrate to FDA that the new device is substantially equivalent to a device that FDA previously approved or cleared for marketing. In contrast, class III devices are generally required to obtain FDA approval through a more stringent premarket approval process, whereby a sponsor must provide evidence, typically including clinical data, to demonstrate with reasonable assurance that the new device is safe and effective. As with new drugs and biologics, FDA’s review of these applications includes an assessment of product labeling and usage. FDA is required to review certain medical product applications within specified time frames. For example, FDA is generally required to review NDAs, BLAs, and ANDAs within 180 days of receipt. PDUFA also established performance goals to speed up FDA’s process for reviewing NDAs and certain BLAs. 
These performance goals can be grouped into three main categories—those related to the speed at which the agency (1) reviews applications and supplemental materials, (2) schedules and holds meetings with sponsors, and (3) issues written guidance as requested by sponsors. Multiple performance goals exist within each of these broad categories. For example, one performance goal is that FDA review and act on 90 percent of certain NDAs and BLAs within 10 months of their receipt; another is that FDA schedule 90 percent of certain meetings with sponsors within 30 days of receiving the sponsor’s meeting request. In addition, MDUFMA established similar types of performance goals related to the timeliness of FDA’s process for reviewing applications for new devices subject to the premarket approval and premarket notification process. As part of its oversight responsibilities, FDA conducts inspections of domestic and foreign establishments. Specifically, FDA conducts inspections of clinical trial sites to ensure the protection of human subjects and the accuracy and validity of clinical trial data reported to the agency. FDA also inspects medical product manufacturing establishments to ensure that manufacturing processes adhere to current good manufacturing practices requirements and regulations. Inspections of manufacturing establishments may occur before medical products are marketed in the United States. To ensure continued adherence to current good manufacturing practices requirements, FDA may also inspect establishments after the product is on the market. FDA is required to inspect certain types of establishments with a particular frequency; however, requirements governing the frequency of these inspections differ. For example, FDA is required to conduct inspections of certain types of establishments every 2 years—including domestic drug and device manufacturers, as well as domestic blood banks.
However, there are no comparable requirements regarding the frequency with which FDA should conduct inspections of other types of domestic establishments, such as domestic human tissue banks, or some foreign establishments, including those manufacturing drugs and devices marketed for sale in the United States. FDA does not have the authority to require foreign establishments to allow the agency to inspect their facilities. However, FDA has the authority to prevent the importation of products manufactured at establishments that refuse to allow an FDA inspection. Because no medical products are absolutely safe—there is always some risk of an adverse event—FDA continues to assess products’ risks and benefits after the products are on the market by using multiple strategies. One such strategy is to collect and analyze adverse event reports related to the use of medical products and monitor them to identify potential safety issues associated with the use of a specific medical product. FDA receives adverse event reports from various sources, including medical product manufacturers, physicians, and the public. FDA requires medical product manufacturers and others to submit reports of adverse events associated with the use of a medical product to FDA at certain frequencies depending on the seriousness of the adverse event and the amount of time the product has been on the market. Physicians and the public may voluntarily submit reports of adverse events to FDA at any time. The agency’s review of these reports helps to identify, among other things, unexpected adverse events, product quality problems, and product use errors related to marketed medical products. These reviews provide information that may lead FDA to require the product’s sponsor to conduct a safety study, make changes to product labeling, or recall a product from the market. 
FDA oversees the advertising and promotion of prescription drugs, biologics, and some devices to ensure that information disseminated about medical products is not false or misleading. FDA regulations also require that product promotions include a balanced disclosure of side effects, contraindications, and warnings. In addition, advertising and promotions may not recommend or suggest any use of a product that is not included in the product’s approved labeling. FDA regulates the content of advertising and promotions regardless of whether they are directed toward consumers or medical professionals. FDA regulations require manufacturers to submit to the agency all final advertising and promotional materials for drugs and biologics at the time the materials are first disseminated to the public. In contrast, FDA does not require manufacturers to submit advertising and promotional materials for devices at the time of their initial dissemination. Companies may also voluntarily submit draft advertising and promotional materials to FDA prior to their public release in order to obtain advisory comments from the agency. Although FDA is not required to review all materials submitted, reviewing final and draft advertising and promotional materials is the agency’s primary mechanism for ensuring that information disseminated about drug and biologic products is not false or misleading. To supplement its examination of submitted materials, FDA staff also monitor the content of disseminated advertising and promotional materials, for example, by attending medical conferences, reviewing company Web sites, and following up on complaints received. Funding and staffing for FDA’s medical product programs have increased mostly as a result of user fee funding, which is primarily directed toward the agency’s review of new medical products. 
FDA is required to apply a certain amount of its fiscal year appropriations to support user fee activities, and agency officials said that this requirement limits the resources available for other medical product program activities that are not supported by user fee funding. In addition to their concerns about the sufficiency of their resources, FDA officials are concerned about the agency’s ability to hire and retain staff in certain scientific occupations. Funding for FDA’s medical product programs increased between fiscal year 1999 and fiscal year 2008, mostly due to increases in user fee funding. Medical product program funding increased 112 percent overall, from about $562 million in fiscal year 1999 to about $1.2 billion in fiscal year 2008. (See fig. 2.) This funding increase was greater than the GDP rate of inflation across this time period—25 percent. Over half of the increase in medical product program funding was due to growth in user fee funding, which grew four times as fast as fiscal year appropriations during this 10-year period. Between fiscal years 1999 and 2008, user fee funding increased 268 percent from about $120 million to about $443 million, while fiscal year appropriations increased 69 percent from about $441 million to about $746 million. Over three-quarters of the increase in user fee funding over this period supported the drugs program, with the remaining portion supporting the biologics and devices programs. Appendix I provides additional information on funding and staffing resources for FDA’s medical product programs. Between fiscal years 1999 and 2008, total funding for FDA’s medical product programs—including fiscal year appropriations and user fee funding—grew 112 percent. This rate of growth was higher than the rates of growth in total funding for the rest of FDA (86 percent), as well as total funding for HHS (98 percent) and the federal government (87 percent). 
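The growth rates quoted above are simple percent changes and can be reproduced from the dollar figures stated in the text. A minimal sketch follows; the inputs are the report's rounded amounts in millions of dollars, so a computed value can land a point away from the report's own figure, which was derived from unrounded totals.

```python
def pct_change(start, end):
    """Percent change from start to end, rounded to the nearest whole percent."""
    return round(100 * (end - start) / start)

# Fiscal year appropriations for the medical product programs,
# FY1999 -> FY2008, in millions of dollars (figures as stated in the text).
print(pct_change(441, 746))  # 69 percent, matching the reported increase

# User fee funding over the same period; the rounded figures yield 269,
# while the report's unrounded data give 268 percent.
print(pct_change(120, 443))
```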
The high rate of growth in total funding for FDA’s medical product programs was due to large increases in FDA’s user fee funding. Fiscal year appropriations for FDA’s medical product programs grew at a slower rate than fiscal year appropriations for other FDA programs between fiscal years 1999 and 2008, as shown in table 1. Fiscal year appropriations for FDA’s medical product programs also grew at a slower rate (69 percent) than discretionary funding for HHS (74 percent) and the federal government (103 percent). PDUFA and MDUFMA require the agency to apply all user fee funding, as well as a specified amount of fiscal year appropriations, to support user fee activities that are related to the agency’s process for reviewing applications for new drugs, new biologics, and certain new devices. Taking this requirement into account, we found that total funding for the medical product programs’ user fee activities increased eight times faster than funding for the programs’ other activities between fiscal years 1999 and 2008. Specifically, funding for user fee activities increased 207 percent over the 10-year period while total funding for the programs’ other activities increased 25 percent—the same rate as inflation over this period as measured by the GDP price index. Because funding for user fee activities grew faster than funding for other program activities between fiscal years 1999 and 2008, a declining share of fiscal year appropriations was available for those other activities. In fiscal year 1999, the medical product programs allocated 48 percent of their total $562 million funding—including 33 percent of their fiscal year appropriations—to user fee activities. In fiscal year 2008, these programs allocated 69 percent of their total $1.2 billion funding—including 51 percent of their fiscal year appropriations—to user fee activities. 
In fiscal year 2008, the medical product programs allocated 31 percent of their total funding to other program activities not funded by user fees. (See fig. 3.) Although total funding increased, FDA officials reported that the decline in the portion of funding available to activities not funded by user fees has seriously limited the agency’s ability to fulfill its oversight responsibilities in some areas. FDA officials noted a disproportionate growth in funding available for the agency’s user fee activities compared with other agency activities not funded with user fees, such as the agency’s oversight of transfusion-related blood products, human tissues, device compliance and enforcement, and radiological health, as well as its work in reviewing ANDAs, examining drug-related advertising materials, and conducting inspections of establishments manufacturing approved drugs. To supplement our analysis of trends in FDA resources from fiscal years 1999 through 2008, we analyzed how FDA’s medical product programs allocated funding to center and field activities from fiscal years 2004 through 2008. We found that each of the medical product programs allocated most of their annual funding to activities conducted by the centers (CDER, CBER, and CDRH). The programs also provided some funding for field activities conducted by ORA. We found that funding for the medical product programs’ center activities grew three times as fast as funding for the programs’ field activities. We also noted that funding for field activities increased at about the same rate as the GDP inflation rate. (See app. II for additional information on trends in center and field funding and staffing resources for the medical product programs.) Staffing resources for FDA’s medical product programs increased between fiscal year 1999 and fiscal year 2008. The number of FTEs supporting FDA’s medical product programs increased 14 percent from 4,925 FTEs in fiscal year 1999 to 5,626 FTEs in fiscal year 2008. 
This increase was due solely to a growth in the number of FTEs funded by user fees—the number of FTEs funded by fiscal year appropriations declined. Specifically, the number of medical product program FTEs funded by user fees increased 113 percent—from 856 FTEs in fiscal year 1999 to 1,825 FTEs in fiscal year 2008—while FTEs funded by fiscal year appropriations declined 7 percent, or from 4,069 FTEs in fiscal year 1999 to 3,802 FTEs in fiscal year 2008. FDA officials told us that they had to actively reduce the number of staff by offering buyouts to employees to leave the agency between fiscal years 2004 and 2006 because the agency did not receive enough fiscal year appropriations in these years to maintain staffing levels. According to FDA officials, FTE costs—salary and benefit costs—grew at a faster rate than fiscal year appropriations during this period. Figure 4 displays the number of FTEs from fiscal year appropriations and user fees for each year, fiscal years 1999 through 2008. While our analysis of FDA data shows that the number of medical product program FTEs increased between fiscal year 1999 and 2008, FTEs do not include contractors and therefore provide a partial measure of total staffing resources. FDA could not provide data showing the total number of contractors it used or the total amount of funding it spent on contractors to support its medical product programs over this period. As a result, we could not fully assess the medical product programs’ staffing resources. FDA officials estimated that the agency used an increasing number of contractors to fulfill its medical product responsibilities between fiscal years 1999 and 2008. However, agency officials were unable to provide us with data to corroborate this estimate. According to FDA officials, the decline in the number of FTEs funded by FDA’s fiscal year appropriations limited the agency’s ability to fulfill its medical product oversight responsibilities. 
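The FTE trends described above combine growth in user-fee-funded staff with a decline in appropriation-funded staff. The same percent-change arithmetic, applied to the FTE counts in the text (which are exact counts, so no rounding discrepancy arises), reproduces the reported figures:

```python
def pct_change(start, end):
    """Percent change from start to end, rounded to the nearest whole percent."""
    return round(100 * (end - start) / start)

# Medical product program FTEs, FY1999 -> FY2008 (counts from the text).
print(pct_change(4925, 5626))  # total FTEs: 14 percent increase
print(pct_change(856, 1825))   # user-fee-funded FTEs: 113 percent increase
print(pct_change(4069, 3802))  # appropriation-funded FTEs: -7 (a 7 percent decline)
```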
FDA officials noted that they do not have enough staff to adequately perform duties that do not receive user fee funding, such as the agency’s review of ANDAs, oversight of product advertising and promotion, and inspections of establishments manufacturing marketed products. As a result, FDA officials noted that the agency’s work in these areas is increasingly backlogged. In addition to their concerns about the adequacy of the agency’s fiscal year appropriations, FDA officials are also concerned about the agency’s ability to hire staff, particularly those in certain scientific occupations. For example, FDA officials noted that the agency is facing challenges hiring biologists, chemists, computer programmers, consumer safety officers, engineers, epidemiologists, mathematical statisticians, medical officers, and pharmacologists, among other occupations. FDA officials noted that the lack of sufficient numbers of staff and extended vacancies in specific occupations resulted in higher workloads and longer hours for current staff, as well as postponed or reduced work in some areas. FDA officials also noted concerns about the agency’s ability to retain staff, particularly those in certain scientific occupations. FDA officials said that a high percentage of staff from the medical product centers and ORA leave their positions—including those who move within FDA, leave the agency, and retire. Specifically, FDA data show that between 2000 and 2007, the average annual percent of staff who left their positions at CDER, CBER, and CDRH ranged from 11 to 13 percent, and at ORA headquarters and regional offices ranged from 6 to 23 percent. However, a portion of these staff stayed within FDA and HHS. FDA officials told us that the loss of any staff from their centers presents challenges as it takes time for the centers to hire and train new staff. 
For example, FDA officials noted that it takes about 2 years to effectively train new staff who review applications for new medical products. New laws and a growing workload increased FDA’s medical product oversight responsibilities. FDA did not fulfill its oversight responsibilities between fiscal years 2004 and 2008 in some areas, which agency officials attributed to resource constraints. Laws enacted since 1999 added new requirements that expanded FDA’s medical product oversight responsibilities. On the basis of our review, we found 11 laws that specifically added to FDA’s medical product oversight responsibilities. These 11 laws were enacted between 2002 and 2007. (See fig. 5.) These 11 laws added many additional requirements and authorities to FDA, increasing the agency’s oversight responsibilities ranging from premarket review of medical products to the agency’s oversight of the safety of marketed medical products. These additional oversight responsibilities included an expansion in FDA’s authority to regulate devices, an increase in the amount of information that the agency needs to review before deciding whether to approve new drugs and biologics, and greater authority to monitor the safety of approved products. To implement these new requirements and authorities, FDA, for example, needed to issue new guidance for industry and new operating procedures for staff, and established new committees that the agency needed to consult with to fulfill its oversight responsibilities. One of the many new oversight responsibilities that FDA was charged with was added by MDUFMA. In 2002, MDUFMA instituted a regulatory oversight function for reprocessed single-use devices. MDUFMA required manufacturers of certain devices to submit additional information to the agency validating that reprocessed single-use devices are substantially equivalent to current or previously marketed single-use devices. 
The law also created a new application for the approval of reprocessed high-risk devices. As a result of these new authorities, FDA created new guidance documents and conducted presentations with industry and healthcare professionals related to the agency’s oversight of these products. According to agency officials, the implementation of this expanded authority resulted in a significant increase in FDA’s workload, particularly in 2002 when FDA officials estimated that about 17 FTEs were dedicated to implementing this authority and developing guidance documents. Another law, the Pediatric Research Equity Act of 2003 (PREA), increased the amount of information that FDA must review to approve a new drug or new biologic. FDA became responsible for reviewing more materials to assess the products’ safety and effectiveness for children, including appropriate information to include on product labeling. Specifically, PREA required sponsors to submit a pediatric assessment containing additional information about the pediatric use of a drug or biologic at the time they submit an application or supplement. As a result of the reviews of these required pediatric assessments, FDA issued 86 PREA-related labeling changes for drugs and biologics between December 2003 and December 2008. A more recent example of a law increasing FDA’s medical product responsibilities is the Food and Drug Administration Amendments Act of 2007 (FDAAA). FDAAA increased FDA’s postmarket oversight responsibilities for medical products by giving FDA authority to require sponsors to conduct studies or clinical trials for approved drugs in cases where FDA has identified new safety concerns. To require such a study, FDA officials said that they document their rationale in a legally enforceable contract with a sponsor. These contracts may outline specific elements of the study design. FDA officials stated that the process of developing such contracts results in additional work for the agency. 
From the enactment of FDAAA in September 2007 through January 2009, FDA required drug sponsors to conduct 45 postmarket studies for NDAs and biologics sponsors to conduct 15 postmarket studies for BLAs, for drugs and biologics approved before and after the implementation of FDAAA. FDA also faced a growing workload and was responsible for overseeing increasing numbers of marketed products and establishments. FDA’s medical product workload grew between fiscal years 1999 and 2008 in part due to the receipt of an increasing number of applications and application supplements. The number of drug, biologic, and device application materials submitted to FDA grew 30 percent over this period, from 23,079 in fiscal year 1999 to 30,060 in fiscal year 2008. In particular, the number of application supplements grew 48 percent (from 13,694 application supplements in fiscal year 1999 to 20,329 application supplements in fiscal year 2008). The number of medical product applications also increased during this time period by 8 percent, or from 8,313 applications to 8,943 applications. At the same time, the number of applications resubmitted for medical product approval decreased 26 percent from 1,072 to 788 (see fig. 6). In addition to receiving an increasing number of applications and application supplements, FDA’s workload also grew due to an increase in other demands placed on the agency. For example, FDA received 797,889 more reports of adverse events related to medical products in fiscal year 2008 than in fiscal year 1999, an increase of 228 percent. FDA also received 40,193 more drug- and biologic-related advertising and promotional materials to examine (an increase of 115 percent), and 885 more meeting requests from sponsors regarding drug and biologic products in development during this time period (an increase of 56 percent). The complexity of products subject to FDA oversight has also grown, thus increasing the agency’s workload. 
FDA officials, as well as FDA’s Science Board, reported that rapid advances in science and technology, including the fields of genomics and nanotechnology, have increased the complexity of the medical products submitted to FDA for premarket approval. FDA officials told us that the agency seeks and provides training for its reviewers so they can more effectively review the safety and effectiveness of these increasingly complex products. However, agency officials said that this training results in less time available for staff to perform their routine duties. In addition, FDA officials also increasingly seek the advice of scientific experts from outside the agency, including advisory committee members, to assist in the review of applications for new drugs and new biologics. Similarly, seeking the advice of experts requires additional staff time to obtain and weigh these perspectives. In addition to facing a growing workload, the total number of medical products and establishments FDA oversees also increased between fiscal years 1999 and 2008. FDA is responsible for monitoring the safety of marketed medical products, and as the number of these products and manufacturing establishments has grown, so have the agency’s oversight responsibilities. The number of medical products approved or cleared for marketing has grown 55 percent, or by 41,203 medical products, during this time period. In addition, the total number of establishments registered to produce medical products marketed for sale in the United States—a proxy for the number of establishments subject to FDA oversight and inspection—grew over this time period, due to increases in the number of foreign establishments. However, from fiscal years 1999 to 2008, FDA saw a decrease—2 percent or 311 establishments—in the number of domestic establishments registered to produce medical products. 
Over the same time period, the number of foreign establishments registered to produce medical products increased by 23 percent, or 1,921 establishments. See table 2 for trends in domestic and foreign establishments registered to produce medical products. FDA officials told us that resource constraints hindered the agency’s ability to fulfill all of its medical product oversight responsibilities between fiscal years 2004 and 2008, but the agency also lacked information to manage some of these oversight responsibilities and estimate current and future resource needs. For the two key areas we reviewed where statutory requirements and performance goals set expectations for the agency’s work during this period—review of applications for generic drugs, new drugs, and new biologics, and medical product inspections—FDA did not meet all of its medical product oversight responsibilities. In the other two key areas we reviewed—examination of advertising and promotional materials and review of adverse event reports—we found that while FDA faced an increasing workload, it could not always provide data on the work it performed to fulfill these responsibilities. FDA did not meet all of its medical product oversight responsibilities where requirements and performance goals set expectations for the agency’s work from fiscal years 2004 through 2008. For example, FDA did not meet the requirement to complete its first review of ANDAs within 180 days of receipt during this period. We found that the percentage of ANDAs that FDA reviewed within this 180-day requirement declined from 87 percent in fiscal year 2004 to 32 percent in fiscal year 2008. FDA received an increasing number of ANDAs during this time period, and agency officials explained that they were unable to review all applications submitted within the 180-day requirement because they did not have sufficient resources to conduct these reviews. As a result, an increasing number of ANDAs were pending review, creating a backlog. 
While FDA met most of its PDUFA performance goals related to the speed at which it reviewed NDAs and BLAs and related application supplements, the agency did not meet most PDUFA performance goals related to the speed at which it scheduled and held meetings with sponsors and responded to sponsor requests for information between fiscal years 2004 and 2008. FDA officials explained that they were unable to meet all of these performance goals due to inadequate resources. FDA officials explained that they placed a higher priority on reviewing applications and therefore had fewer resources to schedule and hold meetings or respond to sponsors’ requests for information. FDA also did not meet all of its inspection requirements and requested additional funding to begin conducting more inspections. FDA did not conduct inspections every 2 years as required for two of three types of establishments we reviewed. FDA officials estimated that the agency, on average, conducts inspections of domestic drug manufacturers every 3 years, domestic device manufacturers every 3 or 5 years, and domestic blood banks every 2 years. FDA officials estimated that the agency conducts inspections less frequently for other types of establishments that do not have required time frames for the frequency of inspections— domestic human tissue banks, foreign drug manufacturers, and foreign device manufacturers. (See table 3.) In fiscal year 2008, FDA requested and received additional funding to strengthen field operations and conduct more domestic and foreign inspections of medical product establishments. FDA faced an increasing workload in the other two areas we reviewed— review of adverse event reports and examination of advertising and promotional materials. Agency officials said they lacked sufficient resources in these areas. Similar to what we reported in 1989, we found that FDA lacks information to manage these responsibilities and estimate current and future resource needs. 
Although adverse event monitoring is a key mechanism for FDA to identify postmarket safety risks related to the use of marketed medical products, agency officials told us that they receive substantially more drug-, biologic-, and device-related adverse event reports than staff can review. Between fiscal years 2004 and 2008, FDA received an increasing number of adverse event reports for medical products, from 635,035 reports in fiscal year 2004 to 1,147,442 reports in fiscal year 2008. However, FDA officials could not provide data showing how many adverse event reports staff review. FDA officials told us that they place the highest priority on reviewing reports of serious adverse events, such as those involving death or severe injury, and unexpected adverse events—those not noted on approved product labeling. Yet, FDA officials were unable to provide data to corroborate their reviews of these reports of serious and unexpected events. In addition, while FDA receives relatively few promotional materials for biologics and devices, the agency receives substantially more drug-related promotional materials than staff can review, according to agency officials. Between fiscal years 2004 and 2008, FDA received a steadily increasing number of final promotional materials—from 45,394 in fiscal year 2004 to 70,509 in fiscal year 2008. Again, FDA could not provide data showing how many drug-related advertising and promotional materials staff review. Although FDA officials told us that they place a high priority on reviewing materials that have the greatest potential to affect public health, they were unable to provide data to corroborate their reviews of these materials. FDA officials have told us that collecting data on the work staff performed would be time-consuming and would divert resources from conducting these reviews. 
While FDA officials noted the agency’s inability to fulfill all of its responsibilities due to resource constraints, FDA does not have the data to develop a complete and reliable estimate of the resources it needs to conduct all of its responsibilities. Specifically, we found that FDA lacked information about its current resources, workload, and performance in some areas, such as with the review of adverse event reports and promotional materials. This basic management information is critical to the development of a complete and reliable resource estimate. FDA officials also told us that the funding amounts requested for FDA and provided by Congress during the past 2 years will permit the agency to respond to its most urgent needs and priorities, although officials also noted that they did not receive enough resources to meet some statutory requirements. For example, agency officials noted that they were unable to inspect certain manufacturing establishments at prescribed intervals due to resource constraints. Furthermore, FDA officials also noted that the agency continues to face significant challenges fulfilling its mission. For more information on the trends in FDA’s workload and resources for the four key areas that we reviewed, see appendix III for FDA’s review of generic drug, new drug, and new biologic applications, appendix IV for inspections of medical product research activities and manufacturing establishments, appendix V for the review of adverse event reports, and appendix VI for the examination of advertising and promotional materials. The growth in the complexity and number of new medical products and the establishments manufacturing them, increasing globalization, and added statutory requirements and responsibilities have translated into mounting and competing demands for FDA’s resources. Concerns regarding the adequacy of these resources are not new, but as demands on the agency have soared in recent years, these concerns have intensified. 
Earlier this year, we included FDA’s oversight of medical products in our High-Risk Series. Our current examination of FDA’s resources confirms that the agency’s ability to protect Americans from unsafe and ineffective medical products is compromised. The structure of the agency’s funding— its reliance on user fees to fund certain activities, particularly those related to the review of new products—is a driving force behind which responsibilities FDA does and does not fulfill. The approval of new products has increasingly become the beneficiary of the agency’s budget, without parallel increases in funding for activities designed to ensure the continuing safety of products, once they are on the market. The enactment of FDAAA in 2007 gave FDA the ability to apply user fee funding to more postmarket activities for some types of medical products, providing the agency more flexibility in its use of funding. FDA reports that it cannot do all that is asked of it and our analysis of the agency’s activities confirms this. However, as FDA officials told us, the agency’s requests for resources do not reflect all the resources it needs to fulfill its mission, including meeting its statutory requirements. FDA could not provide data showing its workload and accomplishments in some areas. Furthermore, it lacks other basic management information, such as the size of its contractor workforce. Without this information, FDA does not have data to reliably estimate its resource needs—a problem we reported 20 years ago and which served as the basis of our recommendation that FDA collect such data. Since then we have made similar recommendations that the agency improve its management and tracking of its resources and workload. FDA has disagreed with these recommendations, claiming that it lacks the resources to devote to this data collection and that it would detract from its oversight responsibilities. 
We acknowledge that FDA is facing significant challenges in fulfilling its responsibilities, but continue to believe that developing such information is an essential component of ultimately enhancing the agency’s ability to adequately fulfill its mission. Without such basic data needed for managing its programs, FDA cannot develop sound and justifiable budget requests that reflect all the work that is vital to fulfilling its mission, including meeting its performance goals and its statutory requirements. It is also difficult for others to independently verify the extent to which FDA receives sufficient resources and whether the agency is appropriately utilizing and prioritizing the resources it receives. We recommend that the Commissioner of FDA establish a comprehensive and reliable basis to substantiate the agency’s estimates of its current and future resource needs in a manner consistent with the principles contained in our cost estimating and assessment guide. To do so, we recommend that the Commissioner of FDA take the following four actions:
1. Conduct a comprehensive assessment of the agency’s staffing resources, including its contractor workforce.
2. Gather data on the work the agency conducts to fulfill its responsibilities.
3. Assess the extent to which the agency is meeting its responsibilities.
4. Develop an evidence-based estimate of the resources needed to fulfill all of its responsibilities.
We provided a draft of this report to HHS for review. HHS provided comments from FDA. In its comments, FDA agreed with our four recommendations and described the steps it would take to implement them. FDA’s comments are reprinted in appendix VII. FDA also provided technical comments, which we incorporated as appropriate. In its comments, FDA acknowledged that we identified some important issues regarding the challenges the agency faces in meeting its medical product responsibilities. 
It highlighted the President’s requested increase in the agency’s medical product program funding for fiscal year 2010, which it said would support a life-cycle approach to safety, provide for increased inspections, and support the implementation of requirements included in FDAAA. Specifically, regarding our recommendations, FDA said that a comprehensive assessment of its staffing resources would provide useful information and that it will expand its current staffing assessment process to include its contractor workforce. The agency also said it will conduct a complete inventory of all regulatory work products by FDA center and that it would identify and implement measures to determine how effectively the agency is meeting its responsibilities. Finally, FDA said that it plans to link these measures to its budget and funding allocation. FDA said that this approach will inform the agency about how well it is allocating its resources and help identify what additional resources it needs to fulfill its responsibilities. We believe that the agency’s completion of the activities described, as well as other necessary and related actions to implement our recommendations, should assist FDA in developing a comprehensive and reliable basis for substantiating the agency’s resource needs and help it better manage its medical product programs. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Commissioner of FDA and appropriate congressional committees. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix VIII. Funding resources for the Food and Drug Administration’s (FDA) medical product programs increased 112 percent between fiscal years 1999 and 2008. Of the medical product programs, drugs program funding increased 145 percent, from $278.3 million in fiscal year 1999 to $680.9 million in fiscal year 2008; biologics program funding increased 88 percent, from $124.4 million in fiscal year 1999 to $233.5 million in fiscal year 2008; and devices program funding increased 73 percent, from $159.0 million in fiscal year 1999 to $275.3 million in fiscal year 2008. Table 4 displays funding resources for FDA programs for fiscal years 1999 through 2008. Staffing resources supporting FDA’s medical product programs—as measured by the number of full-time equivalent (FTE) staff—varied from year to year and increased 14 percent between fiscal year 1999 and fiscal year 2008. Specifically, drugs program FTEs increased 22 percent from 2,456 FTEs in fiscal year 1999 to 2,996 FTEs in fiscal year 2008; biologics program FTEs increased 8 percent from 989 FTEs in fiscal year 1999 to 1,066 FTEs in fiscal year 2008; and devices program FTEs increased 6 percent from 1,480 FTEs in fiscal year 1999 to 1,564 FTEs in fiscal year 2008. Table 5 displays staffing resources for FDA programs from fiscal years 1999 through 2008. Between fiscal years 2004 and 2008, the drugs, biologics, and devices programs allocated most of their funding and staffing to center activities, leaving a smaller share of resources for field activities. Funding for center activities grew faster than funding for field activities, which increased at nearly the same rate as inflation, as measured by the gross domestic product (GDP) price index. During the same period, staffing resources for center activities increased while staffing resources for field activities decreased. 
Funding for center activities grew more than three times as fast as funding for field activities between fiscal years 2004 and 2008. Specifically, center funding for all medical product programs combined grew from $675 million to $995 million over this period, an increase of 47 percent, while field funding for all medical product programs increased from $173 million to $195 million, an increase of 13 percent. (See fig. 7.) While increases in total center funding outpaced the GDP inflation rate of 12 percent during this period, the rate of increase in total field funding remained close to the GDP inflation rate. Drugs program funding for center activities conducted by the Center for Drug Evaluation and Research (CDER) increased 57 percent from about $373 million in fiscal year 2004 to about $588 million in fiscal year 2008, while funding for field activities conducted by the Office of Regulatory Affairs (ORA) rose 8 percent from about $86 million to about $93 million over this period. The increase in field funding for this program was less than the rate of GDP inflation (12 percent) over this period. Biologics program funding for center activities conducted by the Center for Biologics Evaluation and Research (CBER) increased 45 percent from about $140 million in fiscal year 2004 to about $202 million in fiscal year 2008, while funding for biologics field activities conducted by ORA increased 15 percent over this period, from about $27 million to about $31 million. Devices program funding for center activities conducted by the Center for Devices and Radiological Health (CDRH) increased 26 percent from about $162 million in fiscal year 2004 to about $205 million in fiscal year 2008, while funding for field activities conducted by ORA increased 18 percent, from about $60 million to about $70 million. Over two-thirds of each of the medical product centers’ funding supported their user fee activities in fiscal year 2008.
Specifically, CDER, CBER and CDRH each allocated about 78 percent of their centers’ total funding—including an average of 61 percent of the centers’ total fiscal year appropriations—to user fee activities in fiscal year 2008, leaving 22 percent of funding to support the centers’ other activities. In contrast, 23 percent of the medical product programs’ total field funding supported user fee activities, with 77 percent of field funding supporting other activities not funded with user fees. Table 6 displays how the medical product programs allocated funding resources to specific center and field activities. The number of full-time equivalent (FTE) staff supporting center activities grew 8 percent from 4,048 FTEs in fiscal year 2004 to 4,384 in fiscal year 2008, while the number of FTEs supporting field activities conducted by ORA decreased 15 percent from 1,454 FTEs in fiscal year 2004 to 1,243 FTEs in fiscal year 2008. (See fig. 8.) Because counts of FTEs do not include contractors, these data do not fully represent FDA’s staffing resources for these activities. Drugs program staffing resources for CDER activities grew 9 percent from 2,190 FTEs in fiscal year 2004 to 2,396 FTEs in fiscal year 2008, while staffing resources for drugs field activities declined 21 percent from 759 FTEs in fiscal year 2004 to 600 FTEs in fiscal year 2008. Biologics program staffing resources for CBER activities grew 8 percent from 797 FTEs to 858 FTEs, while FTEs supporting biologics field activities declined 13 percent from 241 FTEs to 209 FTEs. Devices program staffing resources for CDRH activities grew 7 percent from 1,061 FTEs to 1,130 FTEs, while staffing resources for devices field activities declined 4 percent from 454 FTEs to 434 FTEs. Table 7 shows how FDA’s medical product programs allocated FTE resources to various center and field activities. 
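Several comparisons above and below measure nominal funding growth against the 12 percent GDP inflation rate for fiscal years 2004 through 2008. A compounding-aware way to make such comparisons is to deflate the nominal growth factor by the price-index growth factor; the sketch below is illustrative only (`real_growth` is a hypothetical helper, not GAO’s or FDA’s methodology).

```python
def real_growth(nominal_pct, inflation_pct):
    """Inflation-adjusted growth, in percent: deflate the nominal growth
    factor by the price-index growth factor."""
    return ((1 + nominal_pct / 100) / (1 + inflation_pct / 100) - 1) * 100

GDP_INFLATION = 12  # GDP price index change, fiscal years 2004-2008, per the text

# Field funding for all medical product programs grew 13 percent nominally,
# which barely outpaces inflation in real terms; center funding's 47 percent
# nominal growth remains a substantial real increase.
print(f"field:  {real_growth(13, GDP_INFLATION):.1f} percent real growth")
print(f"center: {real_growth(47, GDP_INFLATION):.1f} percent real growth")
```

Note that simply subtracting the inflation rate (47 minus 12, or 35) slightly overstates real growth, because growth rates compound rather than add.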
The Food and Drug Administration (FDA) faced an increasing workload related to the process for reviewing generic drug, new drug, and new biologic applications between fiscal year 2004 and fiscal year 2008. For example, FDA received 47 percent more applications for generic drugs in fiscal year 2008 than in fiscal year 2004. Even though FDA funding for the review of these applications grew 53 percent over this time period, agency officials said that resource constraints precluded them from reviewing all applications submitted, resulting in a growing number of applications pending review. FDA reviewed an increasing number of abbreviated new drug applications (ANDA) for generic drugs between fiscal years 2004 and 2008. However, FDA received a greater number of applications each year than it was able to review. The number of original ANDAs received for review increased by 47 percent—from 563 in fiscal year 2004 to 830 in fiscal year 2008. During this time period, FDA reviewed an increasing number of ANDAs each year. In fiscal year 2004, FDA reviewed 1,357 ANDAs and in fiscal year 2008 the agency reviewed 1,933 ANDAs—an increase of 42 percent. While the number of ANDAs the agency reviewed each year increased, FDA was not able to review them all. As a result, the number of applications pending review increased 123 percent over the period. (See table 8.) FDA officials told us that they were unable to review all ANDAs because they did not have enough resources to conduct these reviews. Between fiscal year 2004 and fiscal year 2008, FDA conducted an increased amount of work related to the review of new drug and new biologic applications. In particular, FDA was increasingly involved in the process of new drug and new biologic development, which typically occurs years before sponsors submit a new drug application (NDA) or biologics license application (BLA) to FDA for approval. 
FDA reported that sponsors’ early consultation with the agency generally results in improvements in the safety and effectiveness of the clinical trials. In addition, FDA indicated that the agency’s increased involvement generally improves the quality of information submitted in an application for marketing approval and increases the likelihood that the resulting application will gain faster approval. The number of active investigational new drugs (IND)—representing new drugs and new biologics in development—grew from 12,523 in fiscal year 2004 to 15,020 in fiscal year 2008. To guide the development of these new products, FDA issued an increasing number of written guidance documents to sponsors between fiscal year 2004 and fiscal year 2008. For example, FDA issued 135 responses to clinical holds in fiscal year 2004 and 213 such responses in fiscal year 2008. In addition, while the number of meetings FDA conducted with sponsors regarding new drug development varied from year to year, between fiscal year 2004 and fiscal year 2008 FDA scheduled over 10,000 meetings with sponsors, with between 1,900 and 2,300 such meetings each year. (See table 9.) FDA officials stated that drafting written responses and preparing for and documenting the results of meetings with sponsors requires a substantial amount of staff time. In particular, FDA noted that each meeting typically requires the involvement of at least 15 FDA staff and can require between 120 and 540 hours of staff time. Although FDA was increasingly involved in the process of new drug and new biologic development between fiscal year 2004 and fiscal year 2008, the agency’s review of NDAs and BLAs decreased slightly over the time period, following trends in the number of applications the agency received. As shown in table 9, the total number of original and resubmitted NDAs and BLAs FDA received decreased from 214 applications in fiscal year 2004 to 197 applications in fiscal year 2008.
FDA also reviewed a decreasing number of NDAs and BLAs—in fiscal year 2004 FDA reviewed 206 original and resubmitted NDAs and BLAs and in fiscal year 2008 FDA reviewed 161 such applications. FDA also reviewed between about 3,700 and 4,000 efficacy, labeling, and manufacturing NDA and BLA supplements each year during this period. FDA has many performance goals related to its process for reviewing new drug applications. According to FDA officials, the agency places a higher priority on the speed with which it reviews applications for new drugs and biologics, compared to the speed with which the agency responds to sponsor requests for information and scheduling and holding meetings with sponsors. As a result of this prioritization, FDA focused its resources on its review of applications—and we found FDA generally met its performance goals in this area. However, agency officials noted that the agency did not have sufficient resources to meet performance goals related to responding to sponsor requests for information and scheduling and holding meetings. Between fiscal year 2004 and fiscal year 2008, funding for FDA’s review of ANDAs—which is provided solely through FDA’s fiscal year appropriations—increased 53 percent from about $53 million in fiscal year 2004 to about $82 million in fiscal year 2008. Over the same period, funding resources for FDA’s process for reviewing NDAs and BLAs, an activity that receives both user fee funding and fiscal year appropriations, increased 58 percent. Specifically, funding increased from $437 million in fiscal year 2004 to $691 million in fiscal year 2008. During the same period, the number of full-time equivalent (FTE) staff supporting the agency’s review of ANDAs decreased 12 percent from 427 FTEs in fiscal year 2004 to 376 FTEs in fiscal year 2008. In contrast, the number of FTEs supporting the agency’s review of new drug and new biologic applications increased from 2,561 FTEs in fiscal year 2004 to 2,780 FTEs in fiscal year 2008. 
This increase in FTEs was solely due to an increase in the number of FTEs funded by user fees. Because counts of FTEs do not include contractors, these data do not fully represent FDA’s staffing resources for these activities.

Between fiscal years 2004 and 2008, the number of medical product inspections the Food and Drug Administration (FDA) conducted decreased 17 percent—primarily due to a 19 percent decrease in the number of domestic inspections. Although the total number of inspections decreased, funding for inspections grew 16 percent overall, and the rate of funding increases for drugs and biologics inspections did not keep pace with inflation, as measured by the gross domestic product (GDP) price index. The agency conducted an increasing number of foreign inspections, which on average cost more than twice as much as domestic inspections; this shift may explain why increased inspection funding supported fewer total inspections. The total number of inspections FDA conducted for its medical product programs decreased from 7,589 inspections in fiscal year 2004 to 6,306 inspections in fiscal year 2008, a decline of 1,283 inspections or 17 percent. The total number of inspections conducted for each program decreased over the time period. (See fig. 9.) Between fiscal years 2004 and 2008, FDA decreased the number of domestic medical product program inspections conducted each year. FDA conducted 6,849 domestic inspections in fiscal year 2004 and 5,543 domestic inspections in fiscal year 2008—a decline of 19 percent or 1,306 inspections over the 5-year time period. FDA reduced the number of domestic inspections it conducted for each of the medical product programs between fiscal years 2004 and 2008. For the drugs program, FDA conducted 2,241 domestic inspections in fiscal year 2004 and 1,772 such inspections in fiscal year 2008—a decrease of 469 inspections or 21 percent.
For the biologics program, FDA conducted 2,009 domestic inspections in fiscal year 2004 and 1,678 domestic inspections in fiscal year 2008, a decrease of 331 inspections or 16 percent. For the devices program, FDA conducted 2,599 domestic inspections in fiscal year 2004 and 2,093 domestic inspections in fiscal year 2008, a decline of 506 inspections or 19 percent. FDA conducted fewer domestic medical product inspections overall, although the agency increased the number of certain types of domestic inspections. For example, within the biologics program, FDA increased the number of domestic inspections of human cellular, tissue, and gene therapy products, and vaccines and allergenic products between fiscal years 2004 and 2008. In addition, FDA increased the number of domestic postmarket assurance device inspections it conducted over the 5-year period. (See table 10.) While the number of domestic inspections declined for medical product programs overall between fiscal year 2004 and fiscal year 2008, FDA increased the number of foreign inspections it conducted for the drugs and biologics programs. The total number of foreign inspections fluctuated from year to year, and in fiscal year 2008, FDA conducted a total of 763 foreign inspections—23 more than it did in fiscal year 2004. For the drugs program, FDA conducted 374 foreign inspections in fiscal year 2004 and 452 such inspections in fiscal year 2008—an increase of 78 inspections. For the biologics program, FDA conducted 17 foreign inspections in fiscal year 2004 and 50 such inspections in fiscal year 2008—an increase of 33 inspections. For the devices program, FDA conducted 349 foreign inspections in fiscal year 2004 and 261 such inspections in fiscal year 2008—a decrease of 88 inspections. Despite increases in the total number of foreign inspections conducted over this time period, they constituted a small share—12 percent—of the total number of medical product program inspections in fiscal year 2008.
In addition, FDA is only able to reach a small share of the total number of foreign establishments producing medical products for the U.S. market. In fiscal year 2008, FDA conducted inspections at 749 foreign establishments, which represented about 7 percent of the 10,158 total foreign medical product establishments registered with the agency that year. FDA conducted 17 percent fewer medical product inspections in fiscal year 2008 than it did in fiscal year 2004, although Office of Regulatory Affairs (ORA) funding for these inspections increased 16 percent during this period—from about $101 million to about $117 million. While the total number of medical product inspections FDA conducted decreased, the agency conducted more foreign inspections over this time period. FDA estimates that, on average, the cost of a foreign inspection is more than twice the cost of a domestic inspection. The agency’s increase in foreign inspections may explain why increased inspection funding supported fewer total inspections. Although funding for inspections was greater in fiscal year 2008 than in fiscal year 2004, it did not increase in each of these 5 years for each medical product program. Funding for the drugs and biologics program inspections remained relatively constant between fiscal years 2004 and 2005, decreased in fiscal year 2006, and increased in fiscal years 2007 and 2008. In fiscal year 2008, funding for drugs program inspections was 1 percent greater than it was in fiscal year 2004, and funding for biologics program inspections was 8 percent greater than it was in fiscal year 2004. These rates of increase in funding were substantially lower than the GDP rate of inflation between fiscal years 2004 and 2008 of 12 percent. For the devices program, funding remained relatively constant between fiscal years 2004 and 2005, increased in fiscal years 2006 and 2007, and remained relatively constant between fiscal years 2007 and 2008. 
Over this period, funding for devices program inspections increased 46 percent. (See fig. 10.) Although ORA funding for inspection activities increased between fiscal year 2004 and fiscal year 2008, the number of ORA full-time equivalent (FTE) staff devoted to medical product inspections declined 19 percent during this time period, from 844 FTEs in fiscal year 2004 to 684 FTEs in fiscal year 2008. Each of the medical product programs experienced a decline in FTEs conducting inspections during this time period. Compared to fiscal year 2004 FTE levels, in fiscal year 2008 there were 114 fewer FTEs devoted to drug inspections (a decline of 27 percent), 38 fewer FTEs devoted to biologics inspections (a decline of 19 percent), and 8 fewer FTEs devoted to device inspections (a decline of 4 percent). Most of the decreases in FTEs occurred between fiscal year 2004 and fiscal year 2006. (See fig. 11.) Although contractors do not perform establishment inspections, they may conduct activities that facilitate these inspections. Because counts of FTEs do not include contractors, these data do not fully represent FDA’s staffing resources for these activities. From fiscal years 2004 to 2008, the Food and Drug Administration (FDA) received an increasing number of adverse event reports for marketed medical products—substantially more reports than staff could review, according to FDA officials. While the total number of adverse event reports FDA received increased 81 percent over this time period, funding increased 154 percent and staffing resources increased 100 percent. Although FDA officials told us they received more adverse event reports than staff could review, the agency could not provide data showing the number of adverse event reports staff reviewed during this time period. From fiscal years 2004 to 2008, FDA received an increasing number of adverse event reports for marketed medical products. 
During this time period, the number of drug-related adverse event reports FDA received increased 23 percent, from 426,016 reports in fiscal year 2004 to 522,871 reports in fiscal year 2008. An even larger increase occurred in the receipt of biologics-related adverse event reports, which grew 86 percent, from 19,569 reports in fiscal year 2004 to 36,410 reports in fiscal year 2008. FDA saw the highest growth—210 percent—in device-related adverse event reports, with 189,450 reports received in fiscal year 2004 and 588,161 reports received in fiscal year 2008. As the number of adverse event reports for drugs, biologics, and devices grew between fiscal years 2004 and 2008, the number of reports that FDA considers to be serious increased 72 percent. Although FDA officials told us that they place the highest priority on reviewing serious adverse event reports, agency officials reported that they receive substantially more adverse event reports than staff can review. However, FDA could not provide data showing how many adverse event reports staff have reviewed. According to agency officials, the drug, biologic, and device adverse event reporting systems used by FDA do not allow the agency to accurately determine if an individual adverse event report has been reviewed by staff. FDA’s financial and staffing resources for the review of adverse event reports associated with the use of marketed medical products have grown from fiscal years 2004 to 2008. Overall, funding for adverse event reviews has increased 154 percent during this time period, from about $31 million in fiscal year 2004 to about $78 million in fiscal year 2008. FDA experienced the greatest growth in financial resources for the review of drug-related adverse event reports, with a 215 percent increase, from about $19 million in fiscal year 2004 to about $60 million in fiscal year 2008.
Meanwhile, FDA saw the lowest increase in funding—53 percent—for the review of device-related adverse event reports, from about $10 million in fiscal year 2004 to about $15 million in fiscal year 2008. Funding for adverse event reviews—in total, and for each program—grew at a rate faster than inflation over this time period as measured by the gross domestic product (GDP) price index. See figure 12 for trends in FDA funding for the review of adverse events related to drugs, biologics, and devices. Similar to the increase in funding for reviews related to drugs, biologics, and devices, the number of full-time equivalent (FTE) staff supporting the review of adverse event reports also increased from fiscal years 2004 through 2008. The largest growth in FTEs—248 percent, from 31 FTEs in fiscal year 2004 to 108 FTEs in fiscal year 2008—was for the review of drug-related adverse event reports. Over the same period, the number of FTEs for the review of adverse event reports related to biologics grew 17 percent, from 12 FTEs to 14 FTEs, while the number of FTEs for the review of device-related adverse event reports grew 9 percent, from about 40 FTEs to about 43 FTEs. Because counts of FTEs do not include contractors, these data do not fully represent FDA’s staffing resources for these activities. With the enactment of the Food and Drug Administration Amendments Act of 2007 (FDAAA), FDA was able to apply user fees collected through the Prescription Drug User Fee Act of 1992 (PDUFA), as amended, to support more postmarket safety activities for drugs, such as the review of adverse event reports. FDA attributes about two-thirds of the increase in funding and FTEs between fiscal years 2007 and 2008—142 percent and 40 percent, respectively—for the review of drug-related adverse events to user fee funds.
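The growth rates for adverse event reports follow directly from the reported counts. As a minimal check of the percent-change arithmetic (`pct_change` is an illustrative helper, not an FDA or GAO tool):

```python
def pct_change(start, end):
    """Percent change from start to end, rounded to the nearest whole percent."""
    return round((end - start) / start * 100)

# Adverse event reports received, fiscal year 2004 -> fiscal year 2008,
# using the counts reported in the text
assert pct_change(426_016, 522_871) == 23   # drugs
assert pct_change(19_569, 36_410) == 86     # biologics
assert pct_change(189_450, 588_161) == 210  # devices

# Summing the three programs reproduces the overall growth rate
total_2004 = 426_016 + 19_569 + 189_450
total_2008 = 522_871 + 36_410 + 588_161
print(pct_change(total_2004, total_2008))  # 81, matching the total increase reported above
```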
The Food and Drug Administration (FDA) faced an increasing workload related to its examination of advertising and promotional materials between fiscal years 2004 and 2008, particularly for drug-related promotions. Such promotions constitute the majority of advertising and promotional materials submitted. While the total number of final drug-related promotional materials FDA received increased 55 percent over the period, agency funding for the examination of these materials increased 167 percent and staff resources increased 26 percent. Although FDA officials noted that the agency did not have sufficient resources to examine all drug-related promotional materials submitted for review, FDA also could not provide information on the number of such materials staff reviewed. During fiscal years 2004 through 2008, FDA received an increasing number of advertising and promotional materials for examination from manufacturers; however, the agency did not track all of the drug- and device-related materials that it received or reviewed during this period. According to FDA officials, the agency was unable to examine all materials promoting drugs, although we found it did examine nearly all such materials for biologics. FDA officials also told us that they review all device-related promotional materials that are submitted.

Drugs. FDA received an increasing number of advertising and promotional materials for examination between fiscal year 2004 and fiscal year 2008, but agency officials told us that staff were unable to review all materials submitted. FDA received an increasing number of voluntary draft submissions each year, with 429 submissions in fiscal year 2004 and 634 submissions in fiscal year 2008. In addition to receiving these voluntary draft submissions, FDA received a substantially greater and increasing number of final materials that manufacturers were required to submit.
FDA received 45,394 final materials in fiscal year 2004 and 70,509 final materials in fiscal year 2008—an increase of 55 percent over the time period. FDA officials told us that the agency was unable to examine all of the promotional materials for drugs it received between fiscal year 2004 and fiscal year 2008 because it lacked the resources to do so. However, FDA could not provide data on the number of draft or final materials staff examined during this time.

Biologics. We found that FDA received and examined an increasing number of draft and final advertising and promotional materials for biologics products between fiscal year 2006—the first year of available data—and fiscal year 2008. Specifically, our review of FDA data showed that the agency examined all 2,929 draft and final promotional materials submitted in fiscal year 2006, all 3,256 materials submitted in fiscal year 2007, and all but 17 of 4,480 materials submitted in fiscal year 2008. Most—over 90 percent—of the total number of materials submitted in each of these years were final promotional materials.

Devices. An FDA official told us that the agency received very few promotional materials for devices between fiscal year 2004 and fiscal year 2008—device manufacturers are not required to submit these materials. The official also explained that although all materials received are examined, FDA could not provide data on the number of advertising and promotional materials submitted or examined during this period.

Funding for FDA’s oversight of drug advertising and promotion increased 167 percent from about $4 million in fiscal year 2004 to about $10 million in fiscal year 2008. Funding for FDA’s oversight of biologics advertising and promotion also increased from $546,000 in fiscal year 2004 to $925,000 in fiscal year 2008. In contrast, funding for the agency’s oversight of devices advertising and promotion decreased from $590,000 in fiscal year 2004 to $452,000 in fiscal year 2008.
The number of full-time equivalent (FTE) staff supporting FDA’s oversight of drug promotions grew 26 percent from 35 FTEs in fiscal year 2004 to 44 FTEs in fiscal year 2008. Over this period, the number of FTEs supporting the agency’s oversight of biologics promotions increased from 4 FTEs to 6 FTEs, while the number of FTEs supporting the agency’s review of devices promotions decreased from 5 FTEs to 4 FTEs. Because counts of FTEs do not include contractors, these data do not fully represent FDA’s staffing resources for these activities.

In addition to the contact named above, Geri Redican-Bigott, Assistant Director; Kye Briesath; Cathy Hamann; Rebecca Hendrickson; Richard Lipinski; Emily Loriso; Kevin Milne; Lisa Motley; and Patricia Roy made key contributions to this report.

Information Technology: FDA Needs to Establish Key Plans and Processes for Guiding Systems Modernization Efforts. GAO-09-523. Washington, D.C.: June 2, 2009.
High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009.
Food Safety: Improvements Needed in FDA Oversight of Fresh Produce. GAO-08-1047. Washington, D.C.: September 26, 2008.
Drug Safety: Better Data Management and More Inspections Are Needed to Strengthen FDA’s Foreign Drug Inspection Program. GAO-08-970. Washington, D.C.: September 22, 2008.
Food Labeling: FDA Needs to Better Leverage Resources, Improve Oversight, and Effectively Use Available Data to Help Consumers Select Healthy Foods. GAO-08-597. Washington, D.C.: September 9, 2008.
Prescription Drugs: FDA’s Oversight of the Promotion of Drugs for Off-Label Uses. GAO-08-835. Washington, D.C.: July 28, 2008.
Medical Devices: FDA Faces Challenges in Conducting Inspections of Foreign Manufacturing Establishments. GAO-08-780T. Washington, D.C.: May 14, 2008.
New Drug Development: Science, Business, Regulatory, and Intellectual Property Issues Cited as Hampering Drug Development Efforts. GAO-07-49. Washington, D.C.: November 17, 2006.
Food and Drug Administration: Effect of User Fees on Drug Approval Times, Withdrawals, and Other Agency Activities. GAO-02-958. Washington, D.C.: September 17, 2002.
FDA Resources: Comprehensive Assessment of Staffing, Facilities, and Equipment Needed. GAO/HRD-89-142. Washington, D.C.: September 15, 1989.

Twenty years ago, GAO reported that the Food and Drug Administration (FDA) was concerned that it lacked resources to fulfill its mission, which includes oversight of the safety and effectiveness of medical products—human drugs, biologics, and medical devices—marketed for sale in the United States. Since then, FDA, GAO, and others have raised concerns regarding FDA’s ability to meet its oversight responsibilities. GAO was asked to review the resources supporting FDA’s medical product oversight responsibilities. GAO examined trends in (1) FDA’s funding and staffing resources for its medical product oversight responsibilities from fiscal years 1999 through 2008, and (2) FDA’s medical product oversight responsibilities during this same period. GAO analyzed FDA data on the agency’s resources and workload, reviewed relevant federal laws, and interviewed FDA officials. GAO also examined more-detailed data on FDA’s fiscal year 2004 through 2008 resources and workload in four key areas, representing a range of FDA’s oversight responsibilities, both before and after a medical product is marketed in the United States. Funding and staffing resources for FDA’s medical product programs increased between fiscal years 1999 and 2008, primarily as a result of increased user fees paid by industry, which are made available through appropriations acts to support the agency’s processes for reviewing new medical products. Total funding increased from about $562 million in fiscal year 1999 to about $1.2 billion in fiscal year 2008, with user fee funding accounting for more than half of this increase.
A large and growing portion of funding supported activities for which user fees are collected, resulting in a declining share of funding available for other activities. FDA officials said that this has seriously limited the agency’s ability to fulfill its oversight responsibilities in some areas, particularly those not funded with user fees.

FDA faced challenges fulfilling and managing its growing medical product oversight responsibilities, which agency officials attributed to resource constraints. FDA’s statutory responsibilities grew during this period, and a growing number of medical products subject to FDA oversight and establishments manufacturing these products for the U.S. market also added to the agency’s workload. However, FDA could not provide data showing its workload and accomplishments in some areas, such as its review of reports identifying potential safety issues with specific medical products. Without such information, FDA cannot develop complete and reliable estimates of its resource needs. While FDA officials said that the funding amounts requested for and provided to FDA during the past 2 years will permit the agency to respond to its most urgent needs and priorities, officials also noted that they did not receive enough resources to meet some statutory requirements, such as biennially inspecting certain manufacturing establishments. Furthermore, officials said that the agency faces significant challenges fulfilling its mission to oversee the safety and effectiveness of medical products.
GPRAMA required OMB to establish a single, performance-related website by October 1, 2012. The site was to make program and performance information readily accessible to the public and members of Congress, and easily found on the Internet. OMB made Performance.gov available to the public in August 2011. OMB’s stated goals for Performance.gov include providing (1) a public view into government performance to support transparency, and (2) executive branch management capabilities to enhance senior leadership decision making. Performance.gov is a repository of federal government performance information. One of OMB’s main goals for the website is to link agency programs, their relationships, and their contributions to strategic objectives. This is intended to increase the utility of this information through enhanced agency comparisons, supporting both benchmarking and best practice identification. GPRAMA lists the data elements that are required to be reported on Performance.gov, including quarterly updates for APGs and CAP goals. OMB guidance provides more detailed direction to facilitate the submission of that information. APGs are target areas where agency leaders want to achieve near-term performance increases. APGs are often referred to as the agencies’ highest priority performance goals. CAP goals are outcome-oriented goals covering a limited number of crosscutting policy areas, as well as goals to improve management across the federal government. These goals are intended to cover areas where increased cross-agency collaboration is needed to improve progress towards shared, complex policy, or management objectives. Examples include attracting foreign investment to spur job growth and enabling agencies to recruit and hire the best talent. In March 2013, federal agencies added the first performance updates for CAP goals and APGs to Performance.gov.

Several entities maintain and operate Performance.gov.
Figure 1 provides an overview of the Performance.gov governance structure. Office of Management and Budget. OMB is responsible for setting the direction, vision, policy, and guidance of Performance.gov and ensuring its effective operation. OMB has partnered with GSA and the PIC, and has contract support from eKuber Ventures Inc. to provide key services for the site. OMB also collaborated with GSA to establish a Performance Management Line of Business (PMLOB) to further guide the administration of Performance.gov. Agencies. GPRAMA requires that agencies make their respective strategic plans, performance plans and reports, and information about their APGs (as applicable), including quarterly updates, available for publication on Performance.gov. Twenty-two agencies have web pages on Performance.gov that provide links to their strategic plans, annual performance plans, and annual performance reports; report agency progress on government-wide management initiatives; and show agency contributions to the CAP goals. An additional 31 agencies do not have a dedicated web page on Performance.gov; instead, they provide links to their strategic plans or to their agency plans and reports pages. Agencies submit their annual and, as applicable, quarterly performance information for publication on the website through the Performance Reporting Entry Portal (PREP) System. While OMB and the PIC give feedback on updates to the information or suggest changes, ultimately each agency decides what information is published on its Performance.gov page. General Services Administration. GSA builds the technical platform, provides project management of Performance.gov, and determines the business requirements and priorities. Performance Improvement Council. 
The PIC is chaired by OMB’s Deputy Director for Management and is composed of Performance Improvement Officers (PIO) from each of the 24 Chief Financial Officers (CFO) Act agencies as well as other PIOs and individuals designated by the Chair. The PIC facilitates the exchange of useful practices to strengthen agency performance management, such as through cross-agency working groups. The PIC is supported by an Executive Director and a team of eight full-time staff who conduct implementation planning and coordination on crosscutting performance areas. In coordination with OMB, the PIC provides several types of guidance to agencies. It also trains agency officials responsible for updating the quarterly information on Performance.gov and provides liaisons to answer those officials’ questions. Contractor (eKuber Ventures Inc.). In August 2015, GSA awarded software services company eKuber Ventures a contract to provide operations support and maintenance of Performance.gov and the PREP system. According to the contract, eKuber will provide training and information technology help desk support. The current contract runs through August 2020. Performance Management Line of Business (PMLOB). PMLOB is an interagency effort to develop government-wide performance management capabilities that help meet GPRAMA transparency requirements and support government-wide performance management efforts. PMLOB’s key objectives, according to its 2013 charter, include, among others, developing Performance.gov into a GPRAMA-compliant data tool. When first established, Performance.gov was funded by GSA’s E-Government Fund. Beginning in 2013, this funding was supplemented by agency fees. According to the PMLOB Program Charter and PIC staff, E-Government funding (now funding from the Federal Citizen Services Fund) was to be primarily used for enhancing the site, developing new functionality, and focusing on specific areas identified by management. 
However, according to agency staff, limited funding has led them to focus on system maintenance rather than new development. Staff also said Performance.gov received $1,029,922 from the Federal Citizen Services Fund for fiscal year 2016. However, OMB and PIC staff told us GSA used $700,000 for other purposes, leaving $329,922 for the website. Additional funding for Performance.gov is collected from 15 agencies through interagency agreements with GSA. These agreements document the services GSA will provide and the fees the agencies will pay. Through the interagency agreements, GSA collects approximately $795,000 annually—$53,000 from each of the 15 agencies. According to the PMLOB charter, agency fees are to cover the data collection capabilities and some operational and maintenance costs. PIC staff told us they plan to request a 4 percent increase in agency fees in fiscal year 2017 to meet the increasing costs of operating the website. Digitalgov.gov contains a checklist of requirements for federal websites and digital services that are based on relevant statutes, regulations, executive orders, or policy documents. Digitalgov.gov is managed by GSA and is designed to help agencies provide digital services and information to the public. Our review focused on requirements related to customer feedback and outreach, usability, performance measures, and records management. See appendix II for additional information on our selected requirements. While OMB, GSA, and the PIC took several steps to improve Performance.gov, their actions meet neither Digitalgov.gov nor GPRAMA requirements, and they do not completely address our prior recommendations. For example, OMB, GSA, and the PIC collect some customer feedback, but have not engaged broader audiences. While GSA, on behalf of OMB, also conducted a usability test, OMB has not addressed all of the findings from that test. 
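The funding figures cited above can be verified with simple arithmetic. The sketch below restates them; the projected fiscal year 2017 fee is a hypothetical illustration of the planned 4 percent increase, not a figure from this report.

```python
# Sketch of the Performance.gov funding arithmetic described above.
# Dollar amounts come from the report; the projected FY2017 per-agency
# fee is an assumption illustrating the planned 4 percent increase.

AGENCIES = 15
FEE_PER_AGENCY = 53_000  # dollars, per the interagency agreements with GSA

# Annual fee collections across the 15 agencies: approximately $795,000
annual_agency_fees = AGENCIES * FEE_PER_AGENCY

# FY2016 Federal Citizen Services Fund allocation, less the
# $700,000 GSA used for other purposes
fcsf_allocation = 1_029_922
remaining_for_site = fcsf_allocation - 700_000

# Hypothetical per-agency fee after the planned 4 percent increase
projected_fee = round(FEE_PER_AGENCY * 1.04)

print(annual_agency_fees)   # 795000
print(remaining_for_site)   # 329922
print(projected_fee)        # 55120
```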
Further, OMB and the PIC track 18 of 24 recommended performance measures, but have not set goals for those measures. In addition, our prior work identified several areas where OMB is not fully meeting GPRAMA requirements for Performance.gov. This includes making all the required information for APGs, CAP goals, and the federal program inventory available on the website. OMB has not yet implemented the recommendations we made related to these findings. OMB, GSA, and PIC staff took some steps to prioritize, store, and address user feedback for the website. According to Digitalgov.gov requirements, agencies need to understand the needs of their customers by collecting and addressing customer feedback, and use those data and feedback to continuously improve programs. Moreover, a focus of GPRAMA is to make federal performance information more accessible to the public. The following methods are used to collect customer feedback and improve the website: Website survey. PIC staff told us that a website survey is configured to appear on Performance.gov for about 20 percent of visits. The website survey asks users to rate their overall experience on Performance.gov, how likely they are to return, and how likely they are to recommend it to someone else, among other things. Survey results from October 2014 to December 2015 showed that 48 percent of survey respondents rated their overall experience on Performance.gov an 8 out of 10 or above. Of the 568 respondents during this period, 69 percent were likely or very likely to return to the website, and 61 percent of respondents were likely or very likely to recommend Performance.gov to someone else. Feedback button. OMB and the PIC also collect feedback through the “Feedback” button on Performance.gov’s home page. The “Feedback” button leads the user to a web form that sends feedback directly to GSA, as shown in figure 2. 
Examples of feedback submitted through the form include notifications of broken links, comments about outdated agency information, and questions about where specific information can be found on the website. PREP working group. OMB staff stated that they have been focused on enhancing the internal PREP system that agency officials use to submit the annual and quarterly performance information that GPRAMA requires to be on the website. To help in this effort, the PREP working group—a voluntary group of agency officials responsible for updating their respective agency’s performance reporting information—provides feedback to the contractor on PREP users’ needs. For example, working group members were given the opportunity to test updates to the PREP system and suggest improvements. According to PIC staff, feedback submitted through the website survey, “Feedback” button, PREP working group, and e-mails to the Performance.gov help desk is prioritized and logged into the product backlog, a system GSA uses to store feedback. According to PIC staff, feedback is prioritized based on several factors, including the value it would bring to a larger audience, the cost and estimated time to implement, and the risk and opportunity cost of addressing the feedback. The highest-priority items on the product backlog are placed on the monthly prioritized list and are handled by the contractor. Examples of addressed feedback from the product backlog include modifying graphs to accurately display APG data and updating Performance.gov’s Frequently Asked Questions page to reflect current information. GSA, on behalf of OMB, conducted a usability test on Performance.gov and issued the findings in September 2013. Digitalgov.gov recommends conducting usability testing to collect general feedback from users about the design and functionality of a website, offering invaluable insights into what needs improvement. 
Further, Digitalgov.gov states that simple usability tests are a quick way to identify major problems and give agencies the tools to take immediate action to improve the website. The September 2013 usability test report found several problems with Performance.gov and made recommendations to improve the usability of the website, detailed in table 1. The specific findings of the usability test and the status of actions taken to address those findings are as follows: Accessibility. The 2013 GSA usability test found portions of Performance.gov that were inaccessible to people with visual disabilities. According to OMB staff, changes were made to the website to address the identified accessibility issues consistent with Section 508 requirements. PIC staff also told us that when the PIC adds new content to the website, it is tested for accessibility with tools such as screen readers. Agencies are responsible for ensuring the content they submit to the website, such as the quarterly updates, is accessible. Digitalgov.gov’s guidance on Section 508 states that accessibility testing should be conducted before a web page launches or when making significant changes to digital products and services. Purpose. Some usability testers who participated in the 2013 GSA usability test also reported that they were confused about the purpose of the website because the main page did not clearly explain either Performance.gov’s purpose or its target audience. Digitalgov.gov plain writing requirements state that content should be written in a clear, concise, and well-organized manner. We reviewed related federal websites and found examples of clearly stated purposes on the home page. Figure 3 compares the Data.gov home page to that of Performance.gov. Data.gov has a simple explanation of the site and uses icons to depict the available topics. 
Performance.gov has two large paragraphs of text explaining the benefits of establishing APGs and CAP goals, but it does not clarify its purpose as a central, government-wide website where users can find agencies’ APGs and CAP goals. The lack of a simple and clear explanation of Performance.gov’s purpose could confuse users. Without guidance about tasks that can be accomplished on a website, along with explanations of its different areas and navigation assistance, website users may be unable to successfully achieve their objectives. Data visualizations. The GSA usability test also reported that some users had trouble locating graphs or data visualizations and understanding whether agencies had met target goals. For example, the usability test noted that users wanted a “goal line” showing target values. In 2013, we also reported on the importance of making the information and data on a performance-reporting website engaging and easily understandable. Figure 4 shows how “Measures of Australia’s Progress,” a website designed to show users how Australia is progressing, provides color-coded indicators of how a performance metric is performing relative to its goal. In contrast, Performance.gov does not have color-coded indicators or other data visualizations that help users understand whether agencies are meeting their goals. Search. The 2013 GSA usability test also examined how well the home page’s search function performed. The test revealed that search terms were not highlighted in search results and that search results did not necessarily match the search terms. For example, when users searched for the “National Institutes of Health,” search results returned the home page for the National Science Foundation instead of pages directly related to the National Institutes of Health. PIC staff told us that the search algorithm was modified to highlight the search terms in the search results. 
However, the algorithm has not been updated to return search results more reflective of the search terms. The contract with eKuber discusses an option of improving the search function; however, OMB staff told us they have not decided whether they will invest in enhancing the website’s search capability. OMB staff told us that they have not implemented all the usability test recommendations because of limited resources. Additionally, while the eKuber contract, awarded August 19, 2015, has options to address some usability issues, OMB has neither prioritized which usability issues to address first nor established a timeline for addressing them. Specifically, the contract contains an option to enhance the usability of the site, which GSA can exercise at a later date during the contract period. These contract options include improving the website search function and enhancing data visualizations on Performance.gov, among other things. In 2013, we also found that some Performance.gov users reported issues with accessibility, navigation, and search capabilities. Specifically, we found that OMB had not articulated ways that intended audiences, such as members of the public, Congress, and agency staff, could use the website or the information available through it to accomplish specific tasks. For example, the website gave no indication or examples of the ways that various audiences could use Performance.gov to facilitate coordination or communication about goals, activities, and performance between agencies. At that time, we recommended OMB clarify ways that intended audiences could use the information on Performance.gov to accomplish specific tasks and identify the design changes that would be required to facilitate that change. OMB agreed with our recommendation and subsequently conducted the usability test. 
Although the actions OMB has taken are a step in the right direction, they do not fully address our prior findings with regard to how people could use the website to complete specific tasks. As a result, our prior recommendation remains open. Thus, if usability issues are not addressed, Performance.gov users could continue to have difficulties using and understanding the content posted on the site. Addressing usability issues could also help resolve another open recommendation from our 2013 report concerning engaging wider audiences—such as congressional staff and interested members of the public—to ensure the site is meeting a broad set of user needs. Specifically, in 2013, we recommended that OMB seek to more systematically collect information on the needs of a more varied audience, including through the use of customer satisfaction surveys and other approaches recommended by leading practices. OMB also agreed with this recommendation. While the efforts OMB and the PIC have taken on usability testing and collecting and using customer feedback are steps in the right direction, these actions do not completely address our prior recommendation on engaging a potentially broader audience about the website’s usefulness. As a result, our previous recommendation remains open. Digitalgov.gov requirements state managers should track, analyze, and report on a minimum baseline set of performance, search, customer satisfaction, and other measures. This allows staff to get a holistic view of how well online information and services are delivered. As of May 2016, OMB and the PIC were tracking 18 of the 24 recommended Digitalgov.gov measures. PIC staff told us that they began using the Digital Analytics Program (DAP) to track performance measures for Performance.gov in March 2014. A DAP staff member further explained that the web analytics program is not currently customized to track the six remaining measures for Performance.gov. 
This represents some improvement from 2013, when we found OMB and GSA monitored visitors’ use of Performance.gov by tracking 15 of the 24 website performance measures. At that time, we recommended that OMB seek to ensure that all performance, search, and customer satisfaction measures, consistent with leading practices, are tracked for the website, and, where appropriate, OMB should create goals for those measures to help identify and prioritize potential improvements to Performance.gov. While OMB agreed with the recommendation, it still does not track all the measures we recommended and that are required by Digitalgov.gov. OMB and the PIC are now tracking four customer satisfaction measures that were not tracked in 2013. However, two other measures related to searches—”top referring search terms” and “percentage of visitors using site search”—that OMB and the PIC tracked in 2013 are not being tracked in 2016. Table 2 shows the performance measures tracked for Performance.gov in 2013 as compared to the performance measures tracked in 2016. Additionally, OMB and the PIC have not set goals or targets for any of the measures they are tracking. Under the strategic planning requirements established under GPRA and enhanced by GPRAMA—which can also serve as leading practices for planning for individual initiatives—agencies are to establish performance goals to define the level of performance to be achieved. In addition, agencies are required to establish performance indicators to assess the progress towards the goal, and later evaluate whether the goal has been met. Furthermore, our 2013 report found that goals or targets had not been established for the performance measures the agency was tracking. We recommended that OMB develop goals or targets for those measures. Our prior recommendation remains open. PIC staff told us that goals were not set for Performance.gov performance measures, and improvements have not been made because of limited staffing resources. 
In February 2016, the PIC hired a new Digital Services Director who will be responsible for, among other things, reviewing the performance measures tracked by DAP and recommending changes to the website accordingly. Without tracking all recommended search and customer satisfaction measures, and without establishing goals or targets for these measures, it will be difficult for OMB and the PIC to know whether they are meeting customer needs and delivering the information needed to identify and prioritize potential improvements to the website. Our prior work has found that OMB has not met all of the GPRAMA public reporting requirements for Performance.gov. In particular, our work identified several areas where OMB was not fully meeting APG and CAP goal public reporting requirements. Additionally, the inventory of federal programs had not been updated on Performance.gov since the inventory’s initial release in May 2013. Based on recent communications with OMB staff, these issues have not been fully resolved. APGs. In September 2015, we reported that Performance.gov provided limited information on the quality of performance information used to measure progress on selected APGs. GPRAMA requires agencies to publicly report on how they are ensuring the accuracy and reliability of the performance information they use to measure progress toward these APGs. The six agencies we reviewed for the 2015 report used various sections of Performance.gov to discuss some of the performance information quality requirements for APGs, but none of them met all five GPRAMA requirements for their individual APGs. Moreover, there is no place on the website set aside to discuss the quality of performance information for each APG. We found hyperlinks on Performance.gov to the selected agencies’ performance plans and reports, but no explanation of where to find discussions of the quality of performance information in those plans and reports. 
We concluded that while OMB had for several years directed agencies to discuss the quality of APG performance information in their annual performance plans and reports, the selected agencies’ plans and reports often did not do so. We made two recommendations to OMB aimed at improving public reporting. First, we recommended that OMB, working with the PIC, identify practices participating agencies can use to improve public reporting, in their performance plans and reports, of how they are ensuring the quality of performance information used to measure progress toward APGs. PIC staff took steps to address this recommendation. In May 2016, PIC staff reported that they had summarized the self-assessments completed by the performance improvement officers (PIOs) and their deputies on their agencies’ data quality policies and procedures. The PIC staff summary we reviewed identified aspects of data quality in which agencies had generally rated their performance highest, and other aspects in which agencies had rated their performance lowest. We believe the PIC’s efforts should help PIOs examine their agencies’ data quality policies and procedures, and ultimately improve data quality and the information provided to external audiences. Second, we recommended that OMB, working with the PIC, identify additional changes needed in its guidance to agencies on ensuring the quality of performance information for APGs on Performance.gov. As of June 2016, OMB had not updated its guidance. CAP goals. In May 2016, we reported that OMB and the PIC had incorporated lessons learned from the prior CAP goal period (2012-2014) and provided ongoing assistance to CAP goal teams. Based in part on our June 2014 recommendation, OMB and the PIC updated guidance and developed a new reporting template to help improve public reporting on the implementation of CAP goals. 
OMB and the PIC also implemented strategies to build agency capacity to work across agency lines, such as assigning agency goal leaders, providing ongoing guidance and assistance to CAP goal teams, and holding senior-level reviews. We also found that the selected CAP goals’ quarterly progress updates—published on Performance.gov—met a number of GPRAMA reporting requirements, including identifying contributors and reporting strategies for performance improvement and quarterly results. However, in our May 2016 report, while we found that most of the selected CAP goal teams were consistently reporting the status of quarterly milestones to track goal progress, they had not established quarterly targets as required by GPRAMA. Also, all of the selected CAP goal teams reported that they were working to develop performance measures; while they were at various stages of the process, they were not reporting on these efforts consistently. In that report, we recommended that OMB and the PIC report on Performance.gov the actions that CAP goal teams are taking, or plan to take, to develop performance measures and quarterly targets. OMB staff generally agreed with the recommendation. With improved reporting of performance information, the CAP goal teams will be better positioned to demonstrate goal progress at the end of the 4-year goal period. OMB has not yet confirmed the specific actions it plans to take in response to this recommendation. Federal program inventory. In October 2014, we found that the approach OMB and agencies used to develop an inventory of all federal programs, along with related budget and performance information, had not fully met GPRAMA requirements. GPRAMA requires OMB to make this information publicly available on Performance.gov. PIC staff reported that the federal program inventory requirement was initially met in fiscal year 2013 by presenting data from agency plans and reports as PDF attachments on Performance.gov. 
However, we found that OMB had not published an inventory of federal programs on Performance.gov since 2013. Moreover, OMB did not expect to update the program inventories before May 2017 because the staff who would work on them were heavily involved in implementing the Digital Accountability and Transparency Act of 2014 (DATA Act). Further, in October 2014, we reported that implementation of the DATA Act could be tied to the program inventories because the act requires federal agencies to publicly report, at least quarterly, information about any funding made available to, or expended by, an agency or a component of the agency. We also found that agency reporting for both sets of requirements was web based, which could make it easier to link the two or to incorporate information from one into the other. In July 2015, we recommended that OMB accelerate efforts to determine how best to merge DATA Act purposes and requirements with the GPRAMA requirement to produce a federal program inventory. In April 2016, PIC staff told us that they will work with the DATA Act implementation team to determine how best to integrate the GPRAMA and DATA Act requirements; however, they did not provide specific details. Until OMB determines a strategy to integrate GPRAMA and DATA Act requirements, OMB will not know what resources or steps it needs to take to ensure the requirements are met and incorporated on Performance.gov. OMB and PIC officials have told us that they are focused on ensuring Performance.gov is GPRAMA compliant and are aware that the website is not fully consistent with GPRAMA requirements. Without all the required GPRAMA information, Performance.gov will not be transparent and may fall short of meeting user needs. OMB also does not have a strategic plan for the website to help guide officials in the future. 
Specifically, we found that OMB does not have a customer outreach strategy that incorporates, as appropriate, information about how OMB intends to (1) inform users of changes on Performance.gov, (2) use social media as a method of communication, and (3) use mobile devices and applications. Finally, OMB lacks an archiving plan that details how it will manage the data and content on Performance.gov. OMB has not developed a strategic plan to guide the future of Performance.gov. Agency-wide strategic planning practices required under GPRA, and enhanced by GPRAMA, can serve as leading practices for planning at lower levels within federal agencies, such as individual programs or initiatives. Under this law, strategic plans are the starting point and basic underpinning for results-oriented management. Among other things, strategic plans should contain the goals and objectives, approaches, and resources needed to achieve those long-term goals and objectives. When we began our review, OMB staff said that they had not developed a strategic plan for Performance.gov because a new contractor, eKuber, had just started a few months prior, and they wanted to allow time for the contractor to transition into its new role. eKuber took over the Performance.gov contract in the late summer of 2015. Since that time, OMB and the PIC have taken an important first step toward developing a strategic plan by hiring a Digital Services Director in February 2016. The Digital Services Director’s responsibilities include developing a strategic plan and managing the long-term development of Performance.gov. Hiring a new Director who has responsibility for outlining needed improvements is a move in the right direction. However, without a plan for the future, OMB will not know what resources or steps it needs to take to ensure all requirements are met and incorporated on Performance.gov. 
Such a plan could prove especially valuable in maintaining continuity during the upcoming presidential transition. OMB and the PIC have not developed a customer outreach strategy. The Digital Government: Building a 21st Century Platform to Better Serve the American People strategy calls for federal websites to become more customer centric by responding to the needs of users and making it easier to find and share information and accomplish important tasks “anytime, anywhere, any device.” It also calls on agencies to embrace new technologies to drive participation and to develop innovative, transparent, user-facing products and services efficiently and effectively. Without a customer outreach plan, OMB risks being unable to provide services to its users where they need them most. Specifically, we identified three areas where Performance.gov lacks a customer outreach strategy that incorporates, as appropriate, information about how OMB intends to (1) inform users of changes on Performance.gov, (2) use social media as a method of communication, and (3) use mobile devices and applications. Performance.gov does not inform users when a new quarterly update has been published. Website usability guidelines created by GSA in conjunction with the Department of Health and Human Services (HHS) call for websites to inform users when changes are made. Most of the time, OMB publishes a blog post alerting users that new quarterly data have been posted; however, this blog post is made available through www.WhiteHouse.gov, not Performance.gov. While a blog post on WhiteHouse.gov will inform some users that the data have been updated, more Performance.gov users would be informed of the updates if they were cross-posted to the Performance.gov home page. The lack of alerts is particularly problematic because OMB does not always meet the deadlines published in OMB Circular No. A-11 (which provides guidance to agencies) for the APG and CAP goal quarterly updates. 
For example, in the 2015 version of OMB Circular No. A-11, OMB estimated the update for the first quarter of 2016 would be published around March 17, 2016. However, these data were not published until March 30, 2016. PIC staff told us that, in some cases, OMB held the publication date until it could be coordinated with a press release or a blog posting, which was the case for this quarterly update. The delay in publishing the quarterly updates highlights the need to inform users of when the website has been updated. Other federal agencies have updated users about new information on their website by including a “Latest News” or “Updates” section on their home page or by implementing date and time stamps. For example, the U.S. Census Bureau home page provides users with a “Latest News” section, along with a calendar of events for the upcoming week, to allow users to easily access the most up-to-date information (see figure 5). OMB and PIC staff said that most of the site’s users are other federal agencies and employees who already generally know when information will be updated. The public and members of Congress, however, would not know when information has been updated. In May 2016, PIC staff told us they are considering a number of options to inform users of changes on Performance.gov. This includes a banner on the home screen identifying new information or using time stamps and a timeline to highlight quarterly updates, among other options. While PIC staff are in the early stages of planning, we continue to believe communicating changes on the website will help enhance the usability for all Performance.gov users. Without a systematic method for informing users of when the website has been updated, OMB and the PIC are missing an opportunity to disseminate information to a broader base of users and meet users where they are. On a related issue, in some instances, we were unable to determine when some information on the website was last updated. 
For example, we reviewed the content on seven web pages—Overview, Strategies, Progress Update, Next Steps, Indicators, Continuing Programs and Other Factors, and Related Goals—associated with 22 APGs. We found that none of them had a time or date stamp indicating when the information was last updated. Date or time stamps can provide users with a clear idea of when a web page was last reviewed or updated. This increases a website's transparency. We also reviewed the Progress Update page of each of the 22 APGs. On three of those pages, we were unable to identify any text that would tell the reader when the page was last updated, such as the quarter or year of the update. According to PIC staff, the information on the APG pages always reflects the most current information. As previously noted, GPRAMA requires the APGs to be updated quarterly. Without a date or time stamp or some indication of when the data were last updated, system users, including decision makers, are unable to determine how current the website's data are. Performance.gov provides users with social media links to share information on the website. However, OMB and the PIC are not using social media services, such as Twitter or Facebook, to interact with the site's users. Digitalgov.gov's social media requirement states that staff should use social outreach tools to interact with customers and improve the customer experience. Furthermore, we found that other federal websites have social media pages and updates linked on the home page. For example, the U.S. Department of Housing and Urban Development (HUD) website has a feed on its home page showing tweets from its Twitter account. These tweets provide users with the latest information from HUD, including policy updates and recent news and events related to HUD's mission (see figure 6). According to OMB and PIC staff, they have not used social media for customer outreach because they have limited staffing resources to manage a social media strategy.
However, in May 2016, OMB and PIC staff told us they are planning to hire a Communications Advisor who will be responsible for creating a social media strategy, among other tasks. This is an important first step. Moving forward with a social media strategy should help publicize Performance.gov and improve user experience. Without such a strategy, OMB and the PIC are missing an opportunity to interact with customers and to improve the customer experience. Performance.gov is accessible on mobile devices, such as mobile phones and tablets, as Digitalgov.gov recommends. The Performance.gov home page and the eight main tabs are accessible and readable on a mobile device. We were able to access agency information and read about strategic goals and APGs on a mobile device. However, OMB does not have a mobile application (also known as an app) for Performance.gov. The U.S. Digital Services Playbook outlines key successful practices from the private sector that would help the government build effective digital resources, including mobile applications. According to the playbook, it is important for staff to understand how users access the website, whether through a computer or a mobile device. With that understanding, staff can design digital products that better cater to how users interact with the website. OMB and PIC staff told us they have not determined whether it will be beneficial to develop a mobile application for the site's users. Considering the need for a mobile application when developing a customer outreach plan may help OMB determine future resource requirements. As of the end of February 2016, OMB and the PIC no longer maintain a full archive of Performance.gov. This means that the data that were previously published as a part of the 2012-2013 APGs are no longer available to the public. Additionally, on an ongoing basis, OMB and the PIC do not publish the progress updates from a prior quarter's update for any APG, once the newest quarter's data are released.
However, previous iterations of CAP goal progress have been maintained for the public to access via the CAP goal update pages. PIC staff told us that the platform previously used to archive the site was no longer supported, and that they would like to migrate the archived data back onto Performance.gov, so that it can be accessed publicly again, if funding for the project becomes available. The Digitalgov.gov requirement for records management cites National Archives and Records Administration (NARA) guidance on managing web records, which is based on statutory requirements. This guidance states that federal websites are part of an agency's approach to serving the public and that agencies should conduct risk assessments to determine what parts of their websites should be documented and have records kept. Once an agency determines which content to keep or archive, staff should then develop a web records schedule to document and store that content. The PIC does not have a web records schedule to determine how to manage Performance.gov content and data. Instead, PIC staff told us that GSA is instituting a new method of managing and tracking the progress of its web projects. A web records schedule would provide stakeholders with important information about plans to archive the data and content published on Performance.gov. Further, without an archive of Performance.gov, users can no longer compare long-term agency priority goals and progress made toward those goals. This affects the website's transparency. Since Performance.gov was launched in 2011, OMB has worked with the PIC and GSA to develop a single website that will meet federal requirements for the public reporting of agency performance information. While OMB, GSA, and the PIC have taken several steps to improve Performance.gov, their actions do not fully meet Digitalgov.gov requirements or completely address our prior recommendations.
For example, while OMB and GSA conducted a usability test of the website, they have not addressed all of the test's findings. Further, OMB has experienced several challenges implementing the APG and CAP goal reporting and federal program inventory requirements outlined in GPRAMA. Without improving usability and fully implementing GPRAMA requirements, Performance.gov will have difficulty serving its intended purpose as a central website where users can easily find government-wide performance information. OMB has not developed a strategic plan for Performance.gov. OMB and the PIC took an important first step by hiring a Digital Services Director for the PIC. OMB now needs to outline the goals and objectives, approaches, and resources needed for the future development of Performance.gov. Further, OMB does not have a customer outreach strategy that explores additional ways to display updated information on the Performance.gov home page and to use social media and mobile applications for outreach. Additionally, outlining a plan to manage and archive the content and data on Performance.gov in a systematic way will increase the transparency of the website. Without a strategic plan that incorporates all of these areas, it will be difficult for decision makers to prioritize and plan for future website improvements. Such a plan could prove especially valuable in maintaining continuity during the upcoming presidential transition. We recommend the Director of the Office of Management and Budget, in consultation with the Performance Improvement Council and General Services Administration, take the following three actions: 1. Ensure the information presented on Performance.gov consistently complies with GPRAMA public reporting requirements for the website's content. 2. Analyze and, where appropriate, implement usability test results to improve Performance.gov. 3. Develop a strategic plan for the future of Performance.gov.
Among other things, this plan should include: the goals, objectives, and resources needed to consistently meet Digitalgov.gov and GPRAMA requirements; a customer outreach plan that considers how (1) OMB informs users of changes in Performance.gov, (2) OMB uses social media as a method of communication, and (3) users access Performance.gov so that OMB could, as appropriate, deploy mobile applications to communicate effectively; and a strategy to manage and archive the content and data on Performance.gov in accordance with NARA guidance. We provided a draft of this report to the Director of OMB and the Administrator of GSA for review and comment. On August 5, 2016, we met with OMB, PIC, and GSA staff. OMB and PIC staff provided us with oral comments on the report and we made technical changes as appropriate. OMB staff agreed with the recommendations in the report. GSA did not have comments on the report. We are sending copies of this report to the Director of OMB and the Administrator of GSA as well as appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Recommendation: …reporting on Cross Agency Priority (CAP) goal progress, we recommend that the Director of the Office of Management and Budget (OMB), working with the Performance Improvement Council (PIC), take the following action: report on Performance.gov the actions that CAP goal teams are taking, or plan to take, to develop performance measures and quarterly targets.
Status update: The agency has not yet provided information on what actions it has taken in response to this recommendation.

Recommendation: To help participating agencies improve their public reporting, the Director of OMB, working with the PIC Executive Director, should identify additional changes that need to be made in OMB's guidance to agencies related to ensuring the quality of performance information for agency priority goals (APGs) on Performance.gov.
Status update: We reviewed updates OMB published to Circular No. A-11 in July 2016. Circular No. A-11 continues to direct agencies to provide data quality information for publication on Performance.gov or to provide a hyperlink from Performance.gov to relevant explanation in agencies' performance reports, which was a requirement OMB added in June 2015 in response to our preliminary findings. However, our review found that OMB also needed to update the template agencies complete for Performance.gov updates to make it easier for agencies to publish this information on the website. The July 2016 update of Circular No. A-11 does not indicate whether this template has been updated or whether additional changes to A-11 are needed. We will continue to monitor OMB and the PIC's efforts to address our recommendation.

Recommendation: To ensure that federal program spending data are provided to the public in a transparent, useful, and timely manner, the Director of OMB should accelerate efforts to determine how best to merge Digital Accountability and Transparency Act (DATA Act) purposes and requirements with the GPRA Modernization Act of 2010 (GPRAMA) requirement to produce a federal program inventory.
Status update: In April 2016, OMB staff told us that identifying programs for the purposes of DATA Act reporting would not be completed until after May 2017. However, they said they have convened a working group to develop and vet a set of options to establish a government-wide definition for program that is meaningful across multiple communities and contexts (such as budget, contracting, and grants).

Recommendation: To ensure the effective implementation of federal program inventory requirements and to make the inventories more useful, the Director of OMB should, to better present a more coherent picture of all federal programs: revise relevant guidance to direct agencies to collaborate with each other in defining and identifying programs that contribute to common outcomes; revise relevant guidance to provide a time frame for what constitutes "persistent over time" that agencies can use as a decision rule for whether to include short-term efforts as programs; and define plans for when additional agencies will be required to develop program inventories.
Status update: In June 2016, OMB staff stated that they have not taken any actions in response to our recommendations related to the federal program inventory, as they continue to determine how best to implement inventory requirements in coordination with those of the DATA Act. In our July 2015 testimony on DATA Act implementation, we recommended that OMB accelerate efforts to determine how best to merge DATA Act purposes and requirements with the GPRAMA requirement to produce a federal program inventory. However, at the same hearing, the Acting Deputy Director for Management and Controller at OMB stated that he did not expect an update of the program inventories to happen before May 2017.

Recommendation: To ensure the effective implementation of federal program inventory requirements and to make the inventories more useful, the Director of OMB should, to better present a more coherent picture of all federal programs, include tax expenditures in the federal program inventory effort by designating tax expenditure as a program type in relevant guidance; and developing, in coordination with the Secretary of the Treasury, a tax expenditure inventory that identifies each tax expenditure and provides a description of how the tax expenditure is defined, its purpose, and related performance and budget information.
Status update: As of June 2016, OMB had not taken action to include tax expenditures in the federal program inventory. GPRAMA requires OMB to publish a list of all federal programs on a central, government-wide website. The federal program inventory is the primary tool for agencies to identify programs that contribute to their goals, according to OMB's guidance. By including tax expenditures in the inventory, OMB could help ensure that agencies are properly identifying the contributions of tax expenditures to the achievement of their goals. Although OMB published an initial inventory covering the programs of 24 federal agencies in May 2013, OMB decided to postpone further development of the inventory in order to coordinate with the implementation of the DATA Act. In our July 2015 testimony, we recommended that OMB accelerate efforts to merge DATA Act purposes with the production of a federal program inventory.

Recommendation: To ensure the effective implementation of federal program inventory requirements and to make the inventories more useful, the Director of OMB should, to better present a more coherent picture of all federal programs: revise relevant guidance to direct agencies to consult with relevant congressional committees and stakeholders on their program definition approach and identified programs when developing or updating their inventories; revise relevant guidance to direct agencies to identify in their inventories the performance goal(s) to which each program contributes; and ensure, during OMB reviews of inventories, that agencies consistently identify, as applicable, the strategic goals, strategic objectives, APGs, and CAP goals each program supports.
Status update: In June 2016, OMB staff stated that they have not taken any actions in response to our recommendations related to the federal program inventory, as they continue to determine how best to implement inventory requirements in coordination with those of the DATA Act. In our July 2015 testimony on DATA Act implementation, we recommended that OMB accelerate efforts to determine how best to merge DATA Act purposes and requirements with the GPRAMA requirement to produce a federal program inventory. However, at the same hearing, the Acting Deputy Director for Management and Controller at OMB stated that he did not expect an update of the program inventories to happen before May 2017 because the staff that would work on the program inventories were heavily involved in DATA Act implementation.

Recommendation: Clarify the ways that intended audiences could use the information on the Performance.gov website to accomplish specific tasks and specify the design changes that would be required to facilitate that use.
Status update: OMB has taken some steps to address this recommendation, but additional actions are needed. The General Services Administration, on behalf of OMB, issued a usability test report on Performance.gov in September 2013. The test found that (1) sections of the website were not accessible; (2) users were unclear about the purpose of Performance.gov, its intended audiences, and what users can do on the website; (3) users had difficulty locating graphics and understanding if agencies had met their goals; and (4) the search functionality produced poor results and the search terms were not highlighted. This usability test produced several recommendations based on these findings. According to OMB and PIC staff in August 2015 and May 2016, they have taken some actions to address the recommendations. For example, staff addressed the accessibility issue and partly addressed the search issue. However, OMB has not yet addressed the other two recommendations. Further, OMB and the PIC have not clarified the ways that intended audiences can use the information on the website to accomplish specific tasks, or specified design changes that would be required to facilitate that use, as we described in our report. We will continue to monitor progress.

Recommendation: Systematically collect information on the needs of a broader audience, including through the use of customer satisfaction surveys and other approaches recommended by federal guidance.
Status update: OMB, GSA, and the PIC systematically collect information on the needs of users by implementing a website survey, a feedback form on Performance.gov, and a working group focused on improving the PREP system. Staff have set up a backlog system to prioritize, store, and address user feedback. According to PIC staff, feedback is prioritized based on several factors. These factors include the value it would bring to a larger audience, the cost and estimated time to implement, and the risk and opportunity cost of addressing the feedback. The highest priority items on the product backlog system are placed on the monthly prioritized list for the contractor to begin work on. However, OMB and the PIC have not identified, engaged, or collected information on the needs of a broader audience, such as interested members of the public, and how those needs might be addressed through the website.

Recommendation: Seek to ensure that all performance, search, and customer satisfaction metrics, consistent with leading practices outlined in federal guidance, are tracked for the website, and, where appropriate, create goals for those metrics to help identify and prioritize potential improvements to Performance.gov.
Status update: As of May 2016, OMB and the PIC were monitoring 18 of the 24 recommended performance measures. PIC staff said that they now track the performance measures through the Digital Analytics Program, which does not track the remaining six measures. OMB and PIC staff have not created goals for the measures they track to help identify and prioritize potential improvements to Performance.gov. We will continue to monitor progress.

Recommendation: To ensure that the PIC has a clear plan for accomplishing its goals and evaluating its progress, the Director of OMB should work with the PIC to update its strategic plan and review the PIC's goals, measures, and strategies for achieving performance, and revise them if appropriate.
Status update: The PIC developed a strategic plan for 2015, which identified its mission, goals and strategies, and core responsibilities for achieving them. PIC staff reported that they plan to update the document for 2016 with a more robust plan.

Recommendation: …guidance are made, the Director of OMB should work with the PIC to test and implement these provisions.
Status update: According to information provided by OMB and PIC staff in June 2015, although OMB revised its guidance as we recommended, it did not work with the PIC to test implementation of these provisions. Instead, they told us that both PIC and OMB staff ensure agencies are implementing these provisions of their guidance when reviewing agencies' quarterly update submissions for APGs. However, our analysis of agencies' updates in July 2014 found implementation of these provisions continues to be mixed. We will continue to monitor progress.

Recommendation: …Performance.gov to include additional information about APGs, the Director of OMB should ensure that agencies adhere to OMB's guidance for website updates by providing complete information about the organizations, program activities, regulations, tax expenditures, policies, and other activities—both within and external to the agency—that contribute to each APG.
Status update: …ensure that agencies provide complete information about tax expenditures contributing to their APGs. According to information provided by OMB staff in April 2015, agencies were asked to identify organizations, program activities, regulations, policies, tax expenditures, and other activities contributing to their 2014-2015 APGs. This process began as part of the September 2014 update to Performance.gov, with opportunities for revisions in subsequent quarterly updates. We found that while agencies had made progress in identifying external organizations and programs for their APGs, they did not present this information consistently on Performance.gov. Although each APG web page has a location where agencies are to identify contributing programs, agencies did not always identify external organizations and programs there. Instead, they identified these external contributors elsewhere, such as APG overview or strategy sections. In June 2015, OMB staff said they would work with agency officials to help ensure information is presented in the appropriate area of Performance.gov in future updates.

Recommendation: …Performance.gov to include additional information about APGs, the Director of OMB should ensure that agencies adhere to OMB's guidance for website updates by providing a description of how input from congressional consultations was incorporated into each APG.
Status update: Our review of Performance.gov for the 2014-2015 APGs generally found that either agencies did not include this information or they had not updated it to reflect the most recent round of stakeholder engagement. In June 2015, OMB staff reported that they will focus agency attention on this issue during the development of the 2016-2017 APGs, to be published in October 2015. We will continue to monitor progress.

This report is part of our response to a mandate to assess the implementation of the GPRA Modernization Act of 2010 (GPRAMA). This report assesses the Office of Management and Budget's (OMB) (1) efforts to ensure the usefulness of Performance.gov, and (2) strategic plan for Performance.gov. To address our objectives, we reviewed the 22 Digitalgov.gov requirements for federal websites and digital services, and selected 9 for our assessment of Performance.gov, as shown in table 4. The selected requirements are those most associated with customer feedback and outreach, usability, performance measures, and records management.
We assessed OMB's efforts (in collaboration with the Performance Improvement Council (PIC) and the General Services Administration (GSA)) to ensure the usefulness of Performance.gov and OMB's strategic plan by comparing steps taken and documentation for each to the selected requirements. To further address our first objective, we examined documentation on how OMB was (1) seeking information from various audiences about their needs concerning Performance.gov, (2) ensuring the website was clarifying ways audiences can use Performance.gov, and (3) tracking a broader range of performance and customer satisfaction measures and setting goals for those measures. We used the information to follow up on the status of the recommendations in our 2013 report. Further, we collected documentation on customer service feedback, including survey data collected from the website survey and customer service feedback logs, and analyzed the results. We reviewed the findings of the September 2013 usability test GSA conducted. We collected documentation on the performance measures tracked for Performance.gov and compared it with the recommended measures on Digitalgov.gov. We reviewed requirements for agency performance plans established under the Government Performance and Results Act of 1993 and enhanced by GPRAMA, and used them as a source for leading practices on setting goals for collected performance measures and customer service feedback. We reviewed requirements outlined in GPRAMA for Performance.gov, including public reporting requirements for agency priority goals (APGs), cross-agency priority (CAP) goals, and the federal program inventory. We reviewed other related guidance, such as OMB Circular No. A-11, Preparation, Submission, and Execution of the Budget. We reviewed several of our prior related reports and summarized the findings and recommendations related to OMB's implementation of the selected GPRAMA requirements.
We selected these reports because they represent our most current reports on the implementation of these selected GPRAMA requirements. We also interviewed staff from OMB’s Office of Performance and Personnel Management, the PIC, and GSA to determine the actions taken in response to our recommendations about clarifying ways intended audiences can use Performance.gov, systematically collecting feedback from a broader audience, and tracking recommended performance measures. Further, we communicated with OMB and PIC staff to determine the actions taken thus far to address our prior recommendations in relation to APGs, CAP goals, and the federal program inventory. To further address our second objective, we requested documentation on OMB’s strategic plan for Performance.gov, social media and customer outreach strategy, and web records plan. We found the agency has not developed these documents. We also interviewed staff from OMB, the PIC, and GSA on the Performance.gov website’s strategic plan, social media and customer outreach strategy, web records plan, and accessibility on mobile devices. We compared any steps taken on those actions to Digitalgov.gov requirements. We reviewed the Digital Services Playbook, which outlines key successful practices from the private sector that would help the government build effective digital resources. The playbook provides guidance on interacting with users through different channels. It was used to analyze the extent to which Performance.gov is accessible on mobile devices. We reviewed Performance.gov to determine if it was accessible on mobile devices by visiting each main tab of the website on multiple devices. We also identified NARA guidance on web records management. This guidance assists agencies in how to properly manage and schedule web records. We reviewed the Research-Based Web Design and Usability Guidelines for guidelines about informing website users of changes to the website. 
The Research-Based Web Design and Usability Guidelines provide guidance on a broad range of web design and communication issues. We also reviewed a selection of APG pages on Performance.gov to document whether the page had an indication of the last time it was updated. We concentrated on evaluating the 22 agencies listed on Performance.gov, which are mostly department-level agencies. The additional 31 agencies listed on Performance.gov do not have dedicated pages on the website, and therefore were not evaluated as part of this analysis. For the 22 selected agencies, we focused on the APGs for fiscal years 2014 to 2015 because the number of APGs for fiscal years 2016 to 2017 was not yet finalized at the time of the analysis. We randomly selected one APG from each of the 22 agency pages for review. For each selected APG, we reviewed all of the tabs to determine if there were any date or time stamps to indicate the last time the page was updated. We also reviewed in more detail the Progress Update and Indicators pages to evaluate whether users could determine through page context when the information or data on a given page were last updated. We conducted this performance audit from July 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Lisa Pearson, Assistant Director, and Sonya Phillips, Analyst-in-Charge, supervised the development of this report. Caroline Prado, Robyn Trotter, and Edith Yuh made significant contributions to all aspects of this report. Karin Fangman, Robert Gebhart, Kirsten Lauber, and Donna Miller provided additional assistance.
Congress took steps to improve federal performance reporting through GPRAMA by requiring that OMB provide performance information via a publicly available central website, Performance.gov. In June 2013, GAO reported on the initial development of Performance.gov. GPRAMA includes a provision for GAO to periodically review its implementation. This report assesses OMB's (1) efforts to ensure Performance.gov's usefulness, and (2) strategic plan for the website. GAO compared elements of Performance.gov to GSA's Digitalgov.gov requirements for federal websites. GAO summarized prior work on OMB's implementation of selected GPRAMA requirements. GAO also interviewed OMB, PIC, and GSA staff about recommendations GAO made on developing the website and Performance.gov's strategic plan. The Office of Management and Budget (OMB), General Services Administration (GSA), and the Performance Improvement Council (PIC) took several steps to improve the usefulness of Performance.gov, a website intended to serve as the public window to the federal government's goals and performance. However, their actions do not fully meet selected Digitalgov.gov requirements for federal websites (which are based on relevant statutes, regulations, and executive orders) and do not fully meet provisions of the GPRA Modernization Act of 2010 (GPRAMA): In accordance with Digitalgov.gov, GSA, on behalf of OMB, issued a usability test report in September 2013. The test identified issues with the website's accessibility, purpose, data visualizations, and search function. However, OMB and GSA have not addressed all of the test's findings. OMB and the PIC are tracking 18 of 24 website performance measures required by Digitalgov.gov, but have not set goals for those measures. In June 2013, GAO recommended they track measures and set goals for those measures. However, those recommendations remain open. OMB has not met all of the GPRAMA public reporting requirements for Performance.gov.
In particular, GAO identified several areas where OMB is not fully meeting agency priority and cross-agency priority goal public reporting requirements. OMB and PIC staff told GAO they are aware that Performance.gov is not fully GPRAMA compliant but, moving forward, are focused on ensuring its compliance. According to OMB and PIC staff, limited resources have prevented them from taking actions to address the 2013 usability test findings and from setting goals for measures. Until OMB fully implements Digitalgov.gov requirements and GAO's recommendations on GPRAMA requirements, Performance.gov will continue to have difficulty serving its intended purpose as a central website where users can easily locate government-wide performance information. OMB does not have a strategic plan for the website that will help guide staff in the future. Specifically, OMB does not have a customer outreach strategy that incorporates, as appropriate, information about how OMB intends to (1) inform users of changes on Performance.gov, (2) use social media as a method of communication, and (3) use mobile devices and applications. OMB also lacks an archiving plan to retain data and content on Performance.gov. Agency-wide strategic planning practices required under law can serve as leading practices for planning at lower levels within federal agencies, such as individual programs or initiatives. Consistent with these practices, strategic plans should contain goals and objectives, approaches, and resources. OMB staff said they had not developed a strategic plan for Performance.gov because they wanted to allow transition time for the operations and website maintenance contractor hired in August 2015. OMB staff also said that, in February 2016, they hired a Digital Services Director to develop a strategic plan and manage the website's long-term development.
Without a strategic plan, OMB will not know the resources it needs or the steps to take to meet requirements, and to ensure the site provides useful information to the public. GAO is making three recommendations that OMB work with GSA and the PIC to 1) ensure the information presented on Performance.gov consistently complies with GPRAMA public reporting requirements for the website's content; 2) analyze and, where appropriate, implement usability test results to improve Performance.gov; and 3) develop a strategic plan for the future of Performance.gov that includes goals, objectives, and resources needed to meet website requirements; a customer outreach plan; and a strategy to manage and archive data. OMB staff agreed with GAO's recommendations and provided technical clarifications, which GAO incorporated as appropriate. |
North Korea has five nuclear facilities that, collectively, have the potential to produce nuclear material for creating nuclear weapons. The installations are (1) a graphite-moderated, 5-megawatt electric (MW(e)) power reactor, (2) a plutonium-reprocessing facility, (3) a fuel rod fabrication facility, and (4) two unfinished graphite-moderated reactors—a 50-MW(e) reactor and a 200-MW(e) reactor—that were under construction before the Agreed Framework was signed. Most of the facilities are located in Yongbyon, 60 miles north of Pyongyang. The Treaty on the Non-Proliferation of Nuclear Weapons prohibits nonnuclear weapons states from acquiring nuclear weapons. North Korea became a party to the treaty in 1985 and, in 1992, concluded an agreement with IAEA for safeguarding its nuclear material. The agreement with IAEA—the United Nations-affiliated organization responsible for implementing safeguards requirements under the treaty—requires North Korea to declare all of its nuclear material and to allow IAEA to perform inspections and other safeguards measures at North Korea’s nuclear facilities. The purpose of these measures is to ensure that nuclear material is not diverted to nuclear weapons. IAEA uses several measures to ensure compliance with its safeguards agreements. Material-accounting measures verify the quantity of nuclear material declared to IAEA and any changes in the quantity over time. Containment measures utilize physical barriers, such as walls and seals, to control the access to and movement of nuclear material, while surveillance and other monitoring devices detect the movements of nuclear materials and any tampering with IAEA’s containment measures. Finally, IAEA uses on-site inspections, among other things, to help ensure that all of the material has been declared and placed under IAEA’s control. 
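The material-accounting measures described above reduce to a balance check: the operator's declared (book) inventory is compared against the physically verified inventory, and the difference — conventionally called material unaccounted for (MUF) — is flagged when it exceeds measurement uncertainty. A minimal illustrative sketch; the 90 g figure echoes North Korea's reported declaration, but the verified quantity and uncertainty below are hypothetical, not values from this report:

```python
def material_unaccounted_for(book_inventory_g, physical_inventory_g):
    """Difference between declared (book) and measured (physical) inventory, in grams."""
    return book_inventory_g - physical_inventory_g

def diversion_flagged(book_inventory_g, physical_inventory_g, uncertainty_g):
    """Flag a possible diversion when the discrepancy exceeds
    the combined measurement uncertainty."""
    muf = material_unaccounted_for(book_inventory_g, physical_inventory_g)
    return abs(muf) > uncertainty_g

# Hypothetical numbers: 90 g declared, 82 g physically verified, +/-5 g uncertainty.
print(material_unaccounted_for(90.0, 82.0))  # 8.0
print(diversion_flagged(90.0, 82.0, 5.0))    # True
```

In practice the uncertainty term aggregates instrument error across many measurement points, but the flagging logic is the same comparison shown here.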
According to unverified public accounts, North Korea reported to IAEA in May 1992 that it had about 90 grams of plutonium subject to IAEA’s safeguards from a one-time reprocessing of defective fuel rods. Shortly thereafter, IAEA began implementing safeguards, including inspections, to verify the accuracy and completeness of North Korea’s declaration of the amount of nuclear material in its possession. The inspections identified discrepancies suggesting that North Korea had not declared all of its nuclear material. For example, contrary to North Korea’s claim that it conducted a one-time reprocessing of damaged fuel rods, IAEA concluded that North Korea had reprocessed fuel on several occasions since 1989. North Korea refused to allow IAEA to resolve the discrepancies, limited IAEA’s inspections, refused the implementation of IAEA’s “special inspections” at two sites, and announced its intention to withdraw from the Treaty on the Non-Proliferation of Nuclear Weapons. These and other perceived provocations led to concerns about the possibility of war if North Korea continued to pursue its existing graphite-moderated nuclear program. The Agreed Framework defused tensions on the Korean Peninsula and resulted in various trade-offs between North Korea and the United States. For its part, North Korea committed, among other things, to (1) remain a party to the treaty; (2) freeze the operation and construction of its graphite-moderated reactors and related facilities and, later, to dismantle them; (3) safely store and, at a later time, transfer its spent fuel from North Korea for ultimate disposal; and (4) resolve IAEA’s questions about the accuracy and completeness of North Korea’s 1992 nuclear declaration. In return, the United States agreed, among other things, to create an international consortium of member countries to replace North Korea’s graphite-moderated reactors with two light-water reactors by a target date of 2003. 
The resulting consortium—established in March 1995—is called the Korean Peninsula Energy Development Organization (KEDO). In August 1997, groundbreaking for the reactors occurred in the Kumho district of Sinpo—a port city on North Korea’s east coast. (See fig. 1 for a map identifying Sinpo and other relevant North Korean sites.) While groundbreaking has occurred, no formal delivery schedule has been established for the reactor project. In the meantime, the Agreed Framework envisions specific functions for IAEA, notably that IAEA (1) monitor the freeze at five North Korean nuclear facilities and (2) resume inspections at other nuclear facilities not subject to the freeze. The Agreed Framework also calls upon North Korea to take “all steps that may be deemed necessary” for IAEA to verify the accuracy and completeness of North Korea’s 1992 report on nuclear facilities and the amount of nuclear material in its possession. IAEA began monitoring the five North Korean nuclear facilities subject to the freeze in late November 1994—about 1 month after the Agreed Framework was concluded. While IAEA’s monitoring activities provide assurance that operations and construction at these facilities have ceased, several monitoring problems affect IAEA’s ability to ensure that North Korea is complying fully with certain aspects of the nuclear freeze. For example, although activities affecting North Korea’s reprocessing facility are prohibited, North Korea has not allowed IAEA to implement required safeguards measures on the liquid nuclear waste tanks at the facility. According to IAEA, the measures are needed to ensure that the nuclear waste is not being removed or altered. This is particularly important because removing or altering the nuclear waste could damage critical evidence about the history of North Korea’s nuclear program. 
The Agreed Framework specifies that the freeze on North Korea’s graphite-moderated reactors and related facilities was to be fully implemented within 1 month of the agreement’s signing on October 21, 1994. According to IAEA’s documents and the “supply agreement”—a document that sets forth the conditions for the delivery of the light-water reactors to North Korea under the Agreed Framework—the freeze prohibits North Korea from (1) operating the 5-MW(e) reactor, the fuel rod fabrication plant, and the reprocessing plant and (2) continuing or beginning construction work on such existing facilities as the unfinished 50-MW(e) and 200-MW(e) reactors or on related facilities. Furthermore, the spent fuel from the 5-MW(e) reactor must be stored and disposed of in a manner that does not involve the fuel’s reprocessing in North Korea. The Agreed Framework specifies that North Korea must provide “full cooperation” to IAEA in its monitoring of the freeze. IAEA is not a signatory to the Agreed Framework. However, in November 1994, the United Nations Security Council requested that IAEA take all steps deemed necessary to monitor the freeze. IAEA’s Board of Governors approved the request. Shortly thereafter, IAEA and North Korea began negotiating arrangements for IAEA’s monitoring of the freeze, which, among other things, resulted in the identification of critical buildings at each facility site that would be subject to IAEA’s monitoring measures. According to officials from the State Department and the Arms Control and Disarmament Agency (ACDA), IAEA and North Korea also reached an understanding on the definition of the freeze that, according to IAEA’s documentation, provides that any movements of nuclear material or equipment within the facilities under the freeze, any maintenance work by the operator, and any transfers of nuclear materials out of the facilities must be carried out under IAEA’s observation or under other IAEA arrangements.
Finally, any nuclear equipment and components related to the freeze, including items manufactured for the two reactors under construction, must be monitored by IAEA. IAEA inspectors visited the North Korean facilities subject to the freeze from November 23 to November 28, 1994, and confirmed that the three operating facilities had been shut down and that construction on the two incomplete reactors had stopped. IAEA maintains a continuous presence in North Korea to monitor the facilities and to ensure that they remain under the freeze. According to IAEA, its monitoring activities provide assurance that operations and construction at the five facilities are frozen. IAEA inspectors regularly monitor the 5-MW(e) reactor, the fuel fabrication plant, and the reprocessing plant. IAEA uses all technical means available to monitor the freeze at these facilities, such as using seals that can indicate instances of tampering, using video cameras, and making short-notice inspections. The particular method(s) used depends on the circumstances at each of the three facilities. The primary monitoring method is the use and frequent verification of tamper-indicating seals on equipment and installations throughout the “frozen” nuclear facilities. Video cameras are also used for surveillance. Finally, short-notice inspections are used to monitor certain equipment and areas in the frozen facilities that have not been allowed to be sealed. IAEA inspectors also monitor activities related to the canning and storage of spent fuel from the 5-MW(e) reactor and have, through qualitative measurements of the fuel rods (spent fuel), verified whether the rods are, in fact, irradiated (spent) fuel rods. IAEA also monitors activities at the two unfinished reactors. As with the three other nuclear facilities under the freeze, IAEA established an initial photographic baseline to document the status of each facility’s construction. 
Since then, IAEA inspectors have visited the 50-MW(e) graphite-moderated nuclear reactor in Yongbyon and the 200-MW(e) graphite-moderated nuclear reactor in Taechon a few times a year. During their visits, the inspectors observe the facilities, take updated pictures, and compare the photos to ensure that construction has not resumed at the facilities. While IAEA is confident that operations and construction at the five nuclear facilities have ceased, IAEA identified several problems affecting its ability to determine if North Korea is complying fully with certain aspects of the nuclear freeze. First, despite repeated requests, North Korea has not provided IAEA with adequate information about the amount and location of nuclear equipment and components that it may have produced for its two unfinished reactors. As previously discussed, the nuclear equipment and components for the facilities under the freeze, such as the graphite blocks manufactured for the 50-MW(e) and the 200-MW(e) reactors under construction, are subject to monitoring by IAEA. According to congressional testimony by the former Secretary of Defense in January 1995, North Korea’s 50-MW(e) reactor was expected to be completed in 1995. Because of this schedule, all of the reactor’s equipment and components, including the reactor’s graphite blocks and fuel-handling machines, should have been available for inclusion in the reactor’s building. Instead, North Korea informed IAEA that it had manufactured only about 50 percent of the graphite blocks needed for the 50-MW(e) reactor and none of the graphite blocks needed for the 200-MW(e) reactor, which, according to the former Secretary of Defense, was expected to be completed in 1996. 
According to IAEA, North Korea explained that there was no reason for it to continue manufacturing equipment and components for the two reactors after July 1993, since it had begun discussions with the United States about replacing the graphite-moderated reactors with light-water reactors. However, North Korea’s explanation is insufficient for IAEA to rule out whether any additional nuclear equipment and components exist. The second problem involves the “mixer settlers,” which are part of a system that separates plutonium from uranium and fission products in the reprocessing facility. According to IAEA, North Korea informed IAEA that the mixers need to be maintained frequently to ensure that they operate in case the Agreed Framework collapses and North Korea chooses to resume its nuclear program. As a result, IAEA has not precluded access to the area or equipment and has allowed North Korea to operate the mixers for a brief time each month for maintenance purposes under inspectors’ observation. While IAEA periodically performs short-notice inspections of the mixer area, IAEA cannot be sure that North Korea is operating the mixers within the permissible limit. IAEA wants to monitor the mixers on a continuous basis and, in January 1996, secured North Korea’s agreement to demonstrate relevant equipment containing sensors that would detect instances when the mixers are operating. The sensors would permit IAEA to determine whether North Korea’s use of the mixers is compatible with both (1) the equipment’s maintenance needs and (2) North Korea’s commitment to operate the mixers for only a brief interval each month. IAEA expects to install the equipment in North Korea in mid-1998. Third, IAEA has not been allowed to implement safeguards measures on the liquid nuclear waste tanks with instruments that would ensure that North Korea is not removing or altering the composition of the waste at the reprocessing facility. 
This issue is particularly important because, in addition to the monitoring issue, the tanks hold critical evidence about the history of North Korea’s nuclear program. IAEA has asked North Korea for permission to install instruments for monitoring the volume and composition (level and density) of the liquid in the nuclear waste tanks. According to an IAEA official, these safeguards measures are needed because seals and surveillance do not provide the required assurance—the tanks are connected to a complex and inaccessible piping system that, if operating, would permit the waste to be removed and/or altered. While North Korea maintains that the system’s valves are closed, IAEA is concerned that the valves’ status is not verifiable and that North Korea could be using these or other valves and pumps to tamper with the tanks’ contents. Such an activity would change the composition of the waste (i.e., alter its nuclear “fingerprint” and affect its subsequent analysis), thus violating the terms of the Agreed Framework. However, according to IAEA, whether this is occurring will not be known until North Korea agrees to allow the monitoring instruments to be installed. Thus far, North Korea has denied IAEA’s repeated requests to install the instruments. North Korea has also denied IAEA’s request to take environmental “swipe” samples at the facility and other types of analytical samples. Last, although IAEA has access to all critical nuclear facilities under the freeze, it has experienced difficulties in gaining regular access to some technical buildings at the frozen facilities. IAEA—as a matter of normal practice in carrying out its safeguards inspections—wants to obtain periodic access to all the technical buildings to ensure that they are not being used for unauthorized purposes.
According to State Department and ACDA officials, negotiations between IAEA and North Korea in late 1994 and early 1995 resulted in, among other things, the identification of technical buildings at a facility site that would be subject to IAEA’s monitoring measures under the freeze. According to IAEA, the procedures agreed to with North Korea envision IAEA’s periodic visits to technical support buildings for which North Korea has stated that the buildings’ scope of operations has changed. IAEA stated that, without such visits, the monitoring of the freeze would be limited to only certain buildings where IAEA’s safeguards measures, including inspections, are already applied. However, according to IAEA, North Korea now says that while it agreed to freeze technical buildings that are directly related to its nuclear program, it did not agree to freeze those that are indirectly related to the program. Therefore, North Korea will consider IAEA’s access for visits by inspectors only on a case-by-case basis. According to IAEA, the issue remains unresolved. According to information supplied by the State Department, the United States is “deeply concerned” about the “continued lack” of North Korean cooperation with IAEA’s freeze-monitoring activities, including IAEA’s efforts to monitor the liquid nuclear waste tanks and to help ensure that other information about North Korea’s nuclear program is preserved. State Department guidance in 1996, for example, instructed U.S. delegates to IAEA to remind North Korea that the Agreed Framework requires North Korea to cooperate fully with IAEA in monitoring the freeze and that, “by definition,” full cooperation includes the preservation of information and data that might speak to the history of North Korea’s nuclear program. 
While the Agreed Framework requires North Korea to cooperate fully with IAEA’s freeze-monitoring activities, neither the Agreed Framework nor its subsequent implementing agreements, including the agreement for supplying the reactors, “define” or otherwise discuss North Korea’s cooperation with IAEA on activities related to the preservation of information during the monitoring phase. Officials from the State Department and ACDA concurred with our understanding of the agreement. They explained that the guidance reflected an understanding of what is “implicit” in the Agreed Framework—namely, that the preservation of information will be essential in demonstrating North Korea’s compliance with its safeguards agreement with IAEA. According to IAEA, the freeze-monitoring issues are unresolved, in part, because of a fundamental difference in view between IAEA and North Korea. Specifically, IAEA contends that its safeguards agreement with North Korea is valid and in effect and, as a result, that IAEA’s activities in North Korea—including its monitoring activities—arise from its safeguards agreement with North Korea. According to IAEA, North Korea disagrees and says that its safeguards agreement with IAEA is currently invalid for the facilities subject to the freeze and, consequently, that its acceptance of IAEA’s freeze-monitoring measures derives solely from the Agreed Framework. Furthermore, according to IAEA, North Korea asserts that it is cooperating fully with IAEA’s freeze-monitoring measures because such outstanding issues as IAEA’s monitoring of the liquid nuclear waste tanks relate to the verification of North Korea’s 1992 nuclear declaration and, as a result, need not be resolved until much later in the Agreed Framework’s implementation. (Activities related to IAEA’s verification of North Korea’s nuclear declaration and the timing of these activities are discussed in more detail later in this report.) 
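Two of the monitoring gaps described above reduce to simple threshold checks that instrumentation could automate: summing the mixer settlers' recorded run time against the agreed monthly maintenance allowance, and comparing the waste tanks' level and density readings against a sealed baseline. A hypothetical sketch — the interval format, allowance, and tolerances below are illustrative assumptions, not values from the report:

```python
from datetime import datetime

def monthly_runtime_hours(intervals):
    """Total mixer operating time from recorded (start, stop) datetime pairs."""
    return sum((stop - start).total_seconds() / 3600.0 for start, stop in intervals)

def within_allowance(intervals, allowance_hours):
    """True if the mixers ran no longer than the agreed monthly maintenance limit."""
    return monthly_runtime_hours(intervals) <= allowance_hours

def tank_unchanged(baseline, reading, level_tol=0.5, density_tol=0.01):
    """Compare a (level_cm, density_g_cm3) tank reading with its baseline.

    A drop in level would suggest waste removal; a density shift would
    suggest the contents were diluted or otherwise altered."""
    level0, density0 = baseline
    level1, density1 = reading
    return (abs(level1 - level0) <= level_tol and
            abs(density1 - density0) <= density_tol)

# Hypothetical month: two maintenance runs totaling 3 hours against a 4-hour allowance.
runs = [(datetime(1998, 6, 2, 9, 0), datetime(1998, 6, 2, 11, 0)),
        (datetime(1998, 6, 16, 14, 0), datetime(1998, 6, 16, 15, 0))]
print(within_allowance(runs, 4.0))                   # True
print(tank_unchanged((120.0, 1.20), (117.0, 1.15)))  # False: level and density both shifted
```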
As part of the Agreed Framework, North Korea agreed to allow IAEA to resume certain types of facility inspections upon the conclusion of the agreement for supplying the two light-water reactors. IAEA had been inspecting North Korea’s declared nuclear facilities in the years preceding the Agreed Framework. However, North Korea terminated the inspections after IAEA uncovered evidence suggesting that North Korea had not declared all of its nuclear material. Under the terms of the Agreed Framework, IAEA’s inspections (as opposed to monitoring activities) need resume only at the facilities not subject to the freeze. The applicable facilities are (1) an experimental 8-megawatt thermal reactor (MW(t))—a research reactor for isotope production and research in Yongbyon, (2) a nuclear fuel rod storage facility in Yongbyon, (3) about 30 locations scattered throughout North Korea that have small quantities of nuclear material, and (4) two other facilities that were identified as a “critical assembly” for isotope production in Yongbyon and a “subcritical assembly” in Pyongyang. These facilities are smaller and generally less significant to North Korea’s nuclear program than the facilities under the freeze. Following the conclusion of the light-water reactor supply agreement in December 1995, IAEA continued inspections at facilities not subject to the freeze in March 1996 and resumed inspections at the locations with small quantities of nuclear material scattered throughout North Korea. IAEA inspects most of the facilities a few times a year. The fuel rod storage facility is inspected more frequently because the facility is of greater importance to North Korea’s nuclear program. Finally, through the end of February 1998, IAEA had also inspected a number of the approximately 30 locations scattered throughout North Korea. 
State Department and ACDA officials described North Korea’s agreement to resume IAEA’s inspections at these facilities as a “significant symbolic move” because it represents the beginning of North Korea’s gradual return to the international safeguards system. According to IAEA, North Korea is cooperating with IAEA’s inspection activities at facilities not subject to the freeze with some limitations. Specifically, North Korea has permitted IAEA to take measurements of nuclear material at these facilities and has provided reports on the amount of nuclear material at them for IAEA’s examination. But North Korea has not allowed IAEA to take environmental “swipe” samples, which is a routine safeguards measure applied at similar facilities throughout the world. IAEA will need to perform a wide variety of complex and time-consuming activities to verify the accuracy and completeness of North Korea’s 1992 declaration of the amount of nuclear material in North Korea’s possession. Given the time frames established in the Agreed Framework and the absence of an agreed-upon reactor construction schedule, these activities could suffer delays. Since 1995, IAEA has repeatedly stressed that unless it and North Korea reach an early agreement on obtaining the information needed for verifying North Korea’s declaration and on the measures required to preserve such information, any future possibility of verifying North Korea’s nuclear declaration “might be lost.” Thus far, North Korea has neither provided the information nor agreed to any of IAEA’s proposed interim measures for preserving it, raising concern within IAEA that the necessary information may not be available later. The Agreed Framework requires North Korea to resolve IAEA’s questions about the accuracy and completeness of its 1992 nuclear declaration and thereby come into full compliance with both its safeguards agreement and the Treaty on the Non-Proliferation of Nuclear Weapons. 
Under the terms of the Agreed Framework, this is to occur when a “significant portion” of the reactor project has been completed but before the delivery of the key nuclear components. According to the agreement for supplying the reactors, a significant portion of the reactors will be completed when the first reactor’s containment structure, turbine, and other auxiliary buildings have been completed—but before the reactor’s major systems are introduced into the structure. The Agreed Framework does not specify definitive milestones for constructing the reactors because, according to the principal U.S. negotiator for the agreement, the United States did not want to commit itself to a specific schedule for delivering the reactors. However, shortly after the Agreed Framework was concluded, U.S. government officials estimated that IAEA’s verification of the accuracy and completeness of North Korea’s nuclear declaration would begin in about 1999—4 years before the reactors’ projected delivery date in the agreement. Although site preparation work has begun, the full reactor delivery schedule will not be known until the conclusion of a contract between KEDO and the Korea Electric Power Corporation—the prime contractor for the reactor project—and a “delivery schedule protocol” soon to be negotiated between KEDO and North Korea. Consequently, the time when IAEA’s verification activities may actually begin is uncertain. The delay in IAEA’s verification of North Korea’s nuclear declaration has been the subject of considerable disagreement. At the time when the Agreed Framework was signed, for example, opponents of the Agreed Framework argued that delaying IAEA’s verification activities created a disturbing precedent that not only undermined IAEA’s credibility and authority but also rewarded North Korea for its treaty transgressions. 
Critics also expressed concern that the Agreed Framework essentially allowed North Korea to renegotiate its treaty obligations so that—unlike other treaty members—North Korea need not provide information about its past activities for many years. In addition, critics expressed concern that North Korea may exploit the ambiguities in the Agreed Framework, including the absence of specific time frames for IAEA’s determination of the accuracy and completeness of North Korea’s nuclear declaration. However, according to official U.S. policy, as articulated by State Department and other U.S. government officials, the Agreed Framework did not undermine IAEA’s credibility or authority. Instead, they said that the Agreed Framework demonstrates the United States’ commitment to ensure that the issues identified by IAEA will be resolved. Furthermore, while U.S. government officials acknowledged that delaying IAEA’s verification of North Korea’s nuclear declaration was not preferable, they said that the trade-off was necessary because North Korea was intractable on this point during the negotiations on the Agreed Framework. According to them, the United States negotiated the best deal possible, given the circumstances at that time. They explained that North Korea’s commitments under the Agreed Framework go far beyond North Korea’s obligations under the treaty. Furthermore, they said that delaying IAEA’s verification of North Korea’s nuclear declaration did not compromise U.S. security interests because, according to them, the Agreed Framework ensures that the United States is not disadvantaged in any significant way if North Korea reneges on its commitments. For example, if North Korea backed out of the Agreed Framework in the early years, U.S. officials said that North Korea would have gained very little except modest amounts of heavy fuel oil and some technical assistance related to the safe storage of its spent fuel.
Furthermore, if it reneges on its commitment to provide IAEA with information, North Korea will be left with only the “empty shells” of two light-water reactors. In the meantime, the officials said that the United States will have benefited because North Korea’s nuclear program will have been frozen in the intervening years. IAEA will need to accomplish a wide variety of complex and time-consuming activities to verify the accuracy and completeness of North Korea’s nuclear declaration. For example, IAEA needs to determine the operating history of the 5-MW(e) reactor, as well as the amount of plutonium in the irradiated (spent) fuel rods from the reactor and the composition of the liquid nuclear waste at the reprocessing plant. IAEA will also need to inspect certain waste sites, including waste sites where undeclared nuclear materials are suspected to be present, which were the subject of an IAEA request for special inspections in 1993. These inspections will be time-consuming because one of the suspect sites has been completely camouflaged with dirt and landscaping. Furthermore, IAEA will need to establish whether North Korea has additional nuclear equipment and components for the two incomplete reactors and, if so, where the items are located. Finally, IAEA will need to translate, analyze, and authenticate the documentation on North Korea’s nuclear program and to investigate any leads that IAEA may obtain about the program. In September 1995, IAEA apprised North Korea of the information that IAEA must obtain to verify North Korea’s 1992 nuclear declaration and thereby determine whether North Korea is in compliance with its safeguards agreement and the Treaty on the Non-Proliferation of Nuclear Weapons.
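The link between a reactor's operating history and its plutonium inventory is what makes the 5-MW(e) records so important: plutonium production scales roughly with the integrated thermal energy released in the fuel. A rough illustrative estimate — the ~0.9 g/MWt-day yield is a commonly cited rule of thumb for natural-uranium, graphite-moderated reactors, and the 25 MWt thermal rating is an assumed round figure, neither drawn from this report:

```python
def plutonium_estimate_g(thermal_power_mwt, full_power_days, grams_per_mwt_day=0.9):
    """Rule-of-thumb plutonium production from integrated thermal output."""
    return thermal_power_mwt * full_power_days * grams_per_mwt_day

# One notional year of full-power operation at an assumed 25 MWt:
print(plutonium_estimate_g(25.0, 365.0))  # 8212.5 g, i.e. roughly 8 kg
```

The sketch shows why gaps or ambiguities in the reactor's operating record translate directly into uncertainty, on the order of kilograms, about how much plutonium could have been produced.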
Since 1995, IAEA has repeatedly stressed that unless the parties reach an early agreement on obtaining information about North Korea’s nuclear program and on the measures required to preserve it, any future possibility of verifying North Korea’s nuclear declaration “might be lost.” According to IAEA, this issue is one of the most significant problems that IAEA faces under the Agreed Framework. Nevertheless, in 1996, North Korea informed IAEA that it would not allow IAEA to begin its verification activities until a significant portion of the light-water reactor project is completed—an event whose timing depends on further negotiations. North Korea’s position is consistent with the time frames established in the Agreed Framework. In congressional hearings held shortly after the Agreed Framework’s conclusion, senior administration officials, including the former Secretary of State, stressed that delaying IAEA’s verification activities—while not preferable—did not jeopardize U.S. security interests. Unfortunately, North Korea has neither provided the information nor agreed to any of IAEA’s proposed interim measures for preserving it, and as a result, IAEA has reported that it has no assurance that the necessary information will be available later. According to IAEA officials, North Korea views IAEA’s preservation requirements as excessive and premature in relation to the time frames established in the Agreed Framework. Furthermore, according to the officials, North Korea says that since it intends to make the information available to IAEA when the time comes, it is cooperating with both IAEA and the terms of the Agreed Framework. A delay in determining the operating history of the 5-MW(e) reactor may be the most troublesome, complex, and costly preservation-related problem that IAEA faces under the Agreed Framework.
As discussed, in the early 1990s, IAEA’s inspections identified discrepancies suggesting, in IAEA’s view, that North Korea had not declared all of its nuclear material. Specifically, contrary to North Korea’s claim that it conducted a one-time reprocessing of damaged fuel rods, IAEA concluded that North Korea had reprocessed fuel on several occasions since 1989. Determining whether or not North Korea has diverted fuel from the reactor’s core requires, among other things, measurements of (1) the total amount of plutonium in North Korea’s spent fuel and (2) certain fission products in the discharged fuel. According to Department of Energy (DOE) officials, the amount of plutonium can be determined whenever North Korea permits IAEA to measure the fuel. However, measurements of the fission products become increasingly difficult over time because of their short-lived nature. IAEA had envisioned taking these measurements when the spent fuel was transferred into canisters for storage. However, according to IAEA, North Korea refused because, in its view, it was premature to perform the measurements. According to IAEA, it lost valuable information about the reactor’s core in May 1994. This occurred because, while discharging the reactor, North Korea failed to accept IAEA’s proposals to select, segregate, and secure fuel rods for IAEA’s later measurement. Shortly thereafter, IAEA reported that the “situation resulting from the core discharge was irreversible and had seriously eroded” IAEA’s ability to ascertain whether North Korea had declared all of its plutonium. As early as September 1995, IAEA reported that delaying the measurements over the next several years would result in some (unspecified) “limitations in accuracy” and “significant additional cost” if the containers need to be opened. Furthermore, over time it will no longer be possible to determine the rods’ operating (irradiation) history using nondestructive methods. 
This is because the radioactive isotopes needed for the analysis are “dying out.” In addition, analyzing the aged and corroding fuel rods by “destructive” methods is far more complex and expensive than using nondestructive methods. IAEA is investigating a variety of methods that might be used to verify the accuracy and completeness of North Korea’s nuclear declaration. State Department and ACDA officials stressed, however, that the accuracy and timeliness of any of these methods will depend critically on (1) North Korea’s willingness to permit measurements and samples to be taken at relevant sites and (2) the amount of money that the United States and other interested parties are willing to spend to perform the work. According to information supplied by the State Department, the United States is “deeply concerned” about the absence of tangible North Korean steps to preserve information about the country’s past nuclear activities. A December 1996 State Department cable, for example, expressed deep concern about whether North Korea will fulfill this critical component of the Agreed Framework. The cable instructs U.S. delegates to IAEA to remind North Korea that if the Agreed Framework is to be fully realized, North Korea must take appropriate steps to resolve IAEA’s concerns in this area. Similarly, a March 1997 cable instructs U.S. delegates to remind North Korea that, although it will be several years before the key reactor components are expected to be delivered, North Korea must prepare now so that IAEA’s verification work can proceed smoothly and expeditiously. While the United States is concerned about the extent of North Korea’s cooperation thus far, State Department officials stressed that North Korea will not receive the key nuclear components until it has complied fully with its safeguards agreement. 
In November 1997, a senior State Department official told us that the United States and IAEA continually stress to North Korea the importance of the preservation issue. The official distinguished between the preservation issue and IAEA’s work in monitoring the freeze. Specifically, according to the official, while North Korea’s reluctance to cooperate on the preservation of historically relevant information poses a long-term problem for the project, in the short term, there is “no real problem” and “no alarming consequences” in monitoring the freeze. However, the official said that the United States has made clear to North Korea that its failure to preserve the needed information now may cause a work stoppage on the reactor project later. The Agreed Framework commits the United States to facilitate the delivery of two light-water reactors to North Korea by a target date of 2003. The specific timing of interim milestones—such as the completion of a significant portion of the first reactor’s construction—is, by design, ambiguous and highly dependent on the actions of parties involved in implementing the Agreed Framework. The Agreed Framework’s ambiguity about the timing of the project’s interim milestones and its linkages to reciprocal actions by North Korea create the basis for North Korea’s position that it is premature to resolve matters related to the preservation of vital information needed for IAEA’s verification work. When the Agreed Framework was signed, the United States estimated that IAEA’s verification work would begin in about 1999. Although site preparation work has begun for the reactor project, the reactor’s construction schedule awaits the negotiation of two important instruments—a contract between KEDO and its prime contractor and a “delivery schedule protocol” with North Korea. If the conclusion of these activities were delayed, then IAEA’s verification activities could be correspondingly delayed. 
Schedule delays increase the cost and difficulty of verifying North Korea’s nuclear declaration and lessen the likelihood that IAEA will be able to make a definitive assessment about the accuracy and completeness of North Korea’s nuclear declaration. Any protracted delays are likely to exacerbate these problems and could eventually result in the collapse of the Agreed Framework if IAEA cannot verify, with sufficient assurance, North Korea’s nuclear declaration. We provided the State Department, ACDA, DOE, and IAEA with a draft of this report for their review and comment. We met with State Department and ACDA officials, including representatives of the Department’s Bureaus of East Asian and Pacific Affairs, Political and Military Affairs, and Intelligence and Research, and representatives of ACDA’s Bureau of Nonproliferation and Regional Arms Control. While State Department and ACDA officials generally agreed with the report’s conclusions, they provided detailed comments to emphasize and clarify various points in the report. For example, the officials acknowledged that IAEA has experienced problems associated with its monitoring of North Korea’s nuclear freeze. However, they stressed that since North Korea’s nuclear program remains frozen, the monitoring problems are not central to the implementation of the Agreed Framework and therefore have not jeopardized the agreement. Instead, the primary problem has been securing North Korea’s cooperation in preserving information about its past nuclear activities. The State Department and ACDA agreed that verifying North Korea’s initial nuclear declaration will be a difficult task. IAEA and DOE officials, including the Director of DOE’s International Safeguards Division, also provided comments to improve the technical accuracy of the report. We incorporated the agencies’ comments, as appropriate. 
To obtain information for this report, we reviewed and analyzed the provisions of the Agreed Framework and subsequent implementing agreements, congressional hearings, the safeguards agreement between IAEA and North Korea, and IAEA reports and other documentation describing the scope and status of IAEA’s activities in North Korea. We also discussed IAEA’s activities under the Agreed Framework with cognizant officials from IAEA and ACDA as well as the Departments of State and Energy. Finally, we reviewed State Department cables made available to us through November 1997. The State Department denied our access to eight cables because they either “contain details of intelligence sources and methods as well as information provided by third countries” or dealt with a “highly sensitive” matter that, at the time, was under “active negotiation.” Furthermore, describing our request as “open-ended,” on March 12, 1998, the State Department denied our December 8, 1997, request for monthly updates on the cables. Given our past difficulties in obtaining North Korea’s views, we did not attempt to contact officials from North Korea. We conducted our review from July 1997 through March 1998 in accordance with generally accepted government auditing standards. As agreed with your office, we plan no further distribution of this report until 5 days after the date of this letter. At that time, we will send copies to the appropriate congressional committees, the Secretaries of State and Energy, the Director of ACDA, the Director General of IAEA, and other interested parties. Gene Aloise, Assistant Director; Kathleen Turner, Evaluator-in-Charge; Victor J. Sgobba, Senior Evaluator; Duane G. Fitzgerald, Nuclear Engineer.

Pursuant to a congressional request, GAO reviewed issues related to the Agreed Framework between the United States and North Korea, focusing on the status of the International Atomic Energy Agency's (IAEA): (1) nuclear-freeze-monitoring activities; (2) inspections of facilities not subject to the freeze; and (3) plans to verify the accuracy and completeness of North Korea's 1992 declaration of the amount of nuclear material in its possession. 
GAO noted that: (1) the Agreed Framework requires North Korea to freeze operations and construction at five of its nuclear-related facilities and to permit IAEA to monitor the freeze; (2) in accordance with the arrangements under the Agreed Framework, IAEA began monitoring the freeze at the five facilities in late November 1994; (3) the five facilities, collectively, have the potential to produce nuclear material for creating nuclear weapons; (4) while IAEA is confident that operations and construction at these facilities have been frozen, IAEA identified several problems affecting its ability to determine whether North Korea is complying fully with other aspects of the nuclear freeze; (5) according to IAEA, safeguard measures on the liquid nuclear waste tanks at North Korea's reprocessing facility are needed to ensure that the nuclear waste is not being removed or altered; (6) North Korea says that it is cooperating fully with IAEA's freeze-monitoring measures; (7) the Agreed Framework allows North Korea to continue operating certain nuclear facilities not covered by the freeze; (8) IAEA resumed its inspections of these facilities in March 1996 and inspects most of them several times a year; (9) on the other hand, North Korea still refuses to accept activities, such as environmental sampling, at these facilities; (10) IAEA will need to perform a wide variety of complex and time-consuming activities to verify the accuracy and completeness of: (a) North Korea's initial declaration of nuclear facilities; and (b) the amount of nuclear material in its possession; (11) these activities are linked in the Agreed Framework to certain stages in a reactor's construction; if the reactor project suffers delays, IAEA's activities could be correspondingly delayed; (12) since 1995, IAEA has repeatedly stressed that unless IAEA and North Korea reach an early agreement on: (a) obtaining the information needed to verify the declaration; and (b) the measures required to preserve such 
information, any future possibility of verifying North Korea's nuclear declaration might be lost; and (13) North Korea has neither provided the information nor agreed to all of IAEA's proposed interim measures for preserving it because, in North Korea's view, IAEA's requirements are excessive and premature in relation to the timeframes established in the Agreed Framework.
Prior to 2003, CMS was required by statute to select the two types of Medicare contractors it used at the time—fiscal intermediaries and carriers—from particular organization types. Congress limited the type of contractors CMS could use for claims administration activities when Medicare was enacted in 1965, in part because providers were concerned that the program would give the government too much control over health care. To increase providers’ acceptance of the new program and to assuage their concerns, Congress required that health insurers that already served as payers of health care services to physicians and hospitals become the Medicare claims administration contractors. Specifically, prior to 2003, CMS was required by law to select the first type of claims administration contractor, fiscal intermediaries—contractors that paid Part A and Part B claims for institutions such as hospitals—from among companies that were nominated by health care provider associations. Medicare law further required CMS to select the other type of contractor, carriers—contractors that paid the majority of Part B claims, such as for services provided by physicians and other providers—from among health insurers or similar companies. During this period, Medicare claims administration contracts were typically renewed every year, and CMS could not terminate the contracts unless the contractors were first provided with an opportunity for a public hearing. The contractors themselves could terminate their contracts, and termination costs were reimbursed by CMS irrespective of which party terminated the contract. In addition, the claims administration contractors were paid on the basis of their allowable costs, generally without financial incentives to encourage superior performance. 
The MMA requirement that CMS follow competitive procedures and the FAR in awarding contracts to MACs—except where MMA provisions explicitly differed—introduced key differences in how the agency would have to conduct its MAC contracting compared to how it had conducted its legacy contracting prior to 2003. Notably, under the FAR, agencies are generally required to conduct full and open competition for contracts and are permitted to contract with any qualified entity for any authorized purpose, with some exceptions; agencies are permitted to terminate contracts either for the government’s convenience or if they determine that the contractor is in default; and agencies are permitted to include financial incentives to contractors for meeting or exceeding performance goals. The MMA provided more specificity on certain aspects of CMS’s Medicare claims administration contracting processes. For example, CMS is to conduct a competition for MACs at least every 5 years, develop performance requirements and measurement standards for MACs, set forth the performance requirements in the MAC contracts, ensure the performance requirements and standards are used to evaluate MAC performance and are consistent with the MACs’ statement of work (SOW), and develop a measurement standard for provider and beneficiary satisfaction. CMS selected a contract type for the MACs that allows it to provide incentives tied to service and efficiency of operations. CMS opted to establish MAC contracts as cost-plus-award-fee contracts, a type of cost-reimbursement contract that allows an agency to provide financial incentives to contractors if they achieve performance goals. A MAC may earn an incentive, known as an award fee, based on performance, in addition to reimbursement for allowable costs and a base fee for the contract, which is fixed at the inception of the contract. The FAR provides additional guidance for performance-based contracts. 
If such a contract is used, the FAR requires that the agency establish methods that enable it to assess work performance against measurable performance standards. In addition, the FAR requires that the agency conduct performance evaluations and inform contractors about their performance and the areas in which improvement is expected. The FAR further states that agencies should prepare a quality assurance surveillance plan in conjunction with the SOW, which documents the agency’s approach to evaluating performance, and a review of the contractor’s quality control program. In the new contracting environment, MACs are responsible for most of the functions previously performed by the legacy contractors. They are responsible for processing and paying claims; handling the first level of appeal (redeterminations of denied claims); conducting medical review of claims; putting computerized edits into their portion of the claims-processing system to help ensure proper payment; serving as providers’ primary contact with Medicare by enrolling providers, conducting provider outreach and education, and responding to provider inquiries; and auditing provider cost reports. In addition, MACs are responsible for coordinating with other CMS contractors that perform limited Medicare functions that serve beneficiaries and providers. For example, the 1-800-MEDICARE help line answers calls for general and claims-specific beneficiary inquiries and forwards a relatively small number of complex beneficiary inquiries to the MACs to respond. The MACs also are required to provide reports and other documents, known as deliverables, to CMS within specified time frames. 
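The cost-plus-award-fee arrangement described above combines three pieces: reimbursement of allowable costs, a base fee fixed at contract inception, and a performance-contingent award fee. A minimal sketch of that arithmetic follows, using entirely hypothetical dollar amounts and a hypothetical 0-to-1 performance score; the report does not disclose actual MAC fee figures or award-fee scoring formulas.

```python
# Illustrative sketch of a cost-plus-award-fee payment.
# All figures and the 0-to-1 scoring scale are hypothetical;
# they do not come from the report or from actual MAC contracts.

def total_payment(allowable_costs: float, base_fee: float,
                  award_fee_pool: float, performance_score: float) -> float:
    """Return the contractor's total payment under a cost-plus-award-fee structure.

    allowable_costs   -- actual costs reimbursed by the agency
    base_fee          -- fixed at the inception of the contract
    award_fee_pool    -- maximum incentive available for the period
    performance_score -- fraction of the award-fee pool earned (0.0 to 1.0)
    """
    if not 0.0 <= performance_score <= 1.0:
        raise ValueError("performance score must be between 0 and 1")
    return allowable_costs + base_fee + award_fee_pool * performance_score

# Hypothetical example: $10M in allowable costs, $200K base fee,
# $500K award-fee pool, 80 percent of performance goals met.
payment = total_payment(10_000_000, 200_000, 500_000, 0.8)
print(payment)  # prints 10600000.0
```

The point of the structure is visible in the last term: the base fee is guaranteed, while the award fee scales with measured performance, which is why the FAR requires measurable performance standards and a surveillance plan for such contracts.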
In its February 2005 report to Congress, Medicare Contracting Reform: A Blueprint for a Better Medicare, HHS outlined CMS’s plans for implementing contracting reform and highlighted anticipated improvements, including improved customer service, streamlined service delivery by integrating claims-processing functions, and savings from reducing program costs (see fig. 1). CMS designed the new MAC jurisdictions to achieve operational efficiencies by consolidating the number and types of contractors and better balancing workloads. In the legacy contracting environment, different contractors handled Part A and Part B claims in the majority of states, and multiple contractors were responsible for regions in which they processed claims across several—sometimes noncontiguous—states. In its 2005 report to Congress, CMS called the varying legacy contractors that processed Part A and Part B claims “a patchwork of responsibility and service,” a problem it hoped to solve with consolidation. Whereas in the legacy environment, a single state might have been served by multiple contractors handling Part A and B claims in their separate regions, in the MAC environment, CMS established MAC jurisdictions, which were based on contiguous state boundaries, such that a single A/B MAC handled all Part A and B claims—other than DME claims—in its jurisdiction. (See fig. 2.) As of September 2009, CMS told us it had awarded and implemented 13 MAC contracts, worth at least $3 billion. Each contract is for 1 year (referred to as a base year), with up to 4 “option years,” should CMS choose to exercise them. During the MAC’s base year, the legacy contractors transitioned their workload over a period that generally lasted about 7 months for the 6 MACs in our study, ranging from 4 to 10 months. 
CMS instructed the MACs and legacy contractors to work together to transfer data and records and required the MACs to educate providers about the change. CMS also assigned responsibility to the MACs for consolidating computerized claims edits (used during processing to determine whether to accept, adjust, or reject a claim) that may have differed among the multiple legacy contractors into one consistent set of edits for each newly consolidated MAC jurisdiction. Furthermore, within their respective jurisdictions, A/B MACs were required to consolidate the legacy contractors’ policies that determine what services Medicare covers in a jurisdiction—called local coverage determinations—into one consistent set of policies within each jurisdiction. Two CMS components are principally responsible for Medicare contracting reform: the Office of Acquisition and Grants Management (OAGM) and the Center for Medicare Management (CMM). While the OAGM is responsible for awarding Medicare administrative contracts, divisions within the CMM are responsible for MAC program and operations management, development, and performance assessment, as well as for developing and executing both the Medicare contracting reform budget and the MAC operating budgets. (See app. IV for a CMS organizational chart of the components involved in Medicare contracting reform.) Other parts of CMS coordinate with these two components, such as the Office of Financial Management, which establishes many program requirements for MACs, including, but not limited to, financial reporting. To manage the complex transition and to conduct oversight of the MACs, CMS assembled a staff with experience in acquisitions, contract management, and program management, as well as technical advisors in areas such as information technology and claims processing. While CMS officials took numerous steps to facilitate Medicare contracting reform, we identified several CMS decisions that led to challenges. 
For example, we found that CMS underestimated the volume of appeals the MACs would inherit, which led to claims-payment delays and additional workload for incoming MACs. In some cases, CMS was able to make midcourse adjustments by incorporating lessons learned. CMS has taken steps to facilitate the complex process of implementing Medicare contracting reform, which we described as an inherently high-risk activity in our 2005 report. Medicare contracting reform represents the largest transition of claims administration workload since the inception of the Medicare program and was more complex than smaller-scale transitions CMS conducted in the past with legacy contractors. These earlier transitions were often “turnkey” operations, with incoming contractors retaining outgoing contractors’ staff and equipment. Furthermore, past transitions did not involve the transfer of as many Medicare Part A and B claims or the significant reconfiguration of the associated functional contractors and jurisdictions. This was also the first time that CMS awarded claims administration contracts under requirements for full and open competition. The agency faced the challenge of selecting contractors able to carry out complex activities critical to Medicare administration using procedures consistent with the FAR. In doing so, the agency decided to emphasize past experience and past performance with similar work of organizations that sought to become MACs. For the initial competition for 19 MAC contracts, only three of the organizations seeking contracts lacked Medicare experience. All awards as of September 2009 were made to organizations with previous Medicare experience. Recognizing its challenge to manage the complex transitions, CMS took steps to facilitate implementation activities in a number of areas, including developing an integrated implementation schedule, developing training, hiring support staff, documenting lessons learned, and making midcourse adjustments. 
In particular: CMS established a “cross-component” team to facilitate communication across the agency and developed an integrated implementation schedule. CMS reported that this team developed an integrated schedule for the MAC implementation and other major Medicare initiatives, monitored implementation of cross-cutting initiatives, and identified effects that cut across initiatives. For example, CMS provided us with an integrated timeline the agency had developed that detailed important dates for each of these initiatives. This technical team was responsible for providing weekly updates to the directors of their respective components, who would then elevate issues as needed to CMS’s executive leadership. CMS developed training and manuals for agency staff, MACs, and legacy contractors. CMS held training classes to define the roles and responsibilities of contract administration staff, such as contracting officers and project officers, involved in the award and management of MAC contracts. The agency published a contract administration manual for staff with guidance on processing MAC deliverables and cost reports. In addition, CMS developed educational materials for MACs and legacy contractors, including handbooks that outlined CMS’s policies on issues, such as required meetings and deliverables, and interaction with functional contractors. CMS hired an Implementation Support Contractor to assist in A/B MAC implementation. CMS contracted with Chickasaw Nation Industries (CNI) to conduct various tasks such as monitoring implementation status, performing risk assessments, reviewing the completeness and timeliness of the MACs’ status reports and meeting minutes, and bringing issues and suggestions to CMS regarding the implementation. CMS required the MACs to provide detailed plans and reports to facilitate implementation in each jurisdiction and to submit reports of lessons learned. 
CMS required that each A/B MAC submit a Jurisdiction Implementation Project Plan to detail overall transition plans and a Segment Implementation Project Plan to delineate transition work more specifically for each part of the transition, and that each DME MAC submit an Implementation Project Plan. The MACs were required to update these plans on a biweekly basis during the transition, including details of how each MAC would accomplish the requirements in their SOWs and the time frames for taking these steps. The agency also required that the MACs and CNI submit reports of lessons learned to provide insight on how to improve future transitions, and it requested that legacy contractors submit such reports. Documenting lessons learned helped some aspects of later implementations. For example, lessons learned documents provided to CMS revealed challenges associated with transferring records from legacy contractors to the MACs. Two contractors we interviewed noted that they had to sort through over 100,000 boxes of paper files, as either a legacy contractor or the incoming MAC. One of these contractors reported that these files spanned multiple jurisdictions and estimated that it would cost $11 million to sort and move these records, which they described as unlikely to ever be needed. CMS told us that based on lessons learned documents they recognized a need to begin planning for file transfer as soon as possible. CMS began suggesting that the MACs bring a file transfer plan to their first meeting with the outgoing contractor, and that the outgoing contractor bring descriptions of its current files’ organization and volume and its file search and retrieval methods. CMS made midcourse corrections to facilitate its response to bid protests. As of July 2009, CMS reported that bid protests had been filed in 11 of the 19 jurisdictions and as of September 2009, 6 jurisdictions still had final award decisions pending. 
Bid protests delayed implementation of 3 of the 6 MAC jurisdictions that we reviewed. CMS indicated that responding to bid protests was very time consuming for staff. As CMS gained experience with MAC-related bid protests, the agency told us that it made changes to better respond to them. For example, because CMS initially assigned the same staff to work on procurements for several jurisdictions at a time, when a bid protest occurred in one jurisdiction, CMS shifted staff resources to manage the protest, ultimately delaying award decisions for other jurisdictions not under bid protest. In response, CMS established separate jurisdiction-based review panels, which allowed other staff to continue their work in jurisdictions that were not involved in bid protests. In addition, CMS identified a need to improve its management of MAC-related proposal evaluation documents. The agency has since hired an outside contractor that provided a tool to assist with managing the agency’s documentation of proposal assessments in Cycles 1 and 2 to help the agency respond more quickly should a bid protest occur. While MACs we interviewed generally described CMS’s facilitation steps as helpful, we identified certain agency decisions that led to challenges for the implementation of Medicare contracting reform. Some of these decisions, for example, caused delays in payments to providers. CMS sometimes, but not always, used lessons learned from MACs and legacy contractors to make midcourse adjustments to decisions that initially led to challenges. The decisions we identified include the following: CMS underestimated the number of appeals and provider call volumes, and the legacy contractors did not reduce their appeals workloads to target levels before the MAC transitions. 
Three of the six MACs we interviewed and CMS reported that some legacy contractors turned over a larger-than-expected appeals workload, resulting in delays in resolving appeals and, in some cases, higher customer service call volumes than CMS estimated. (See table 1.) CMS required all legacy contractors to set workload processing goals to reduce the number of appeals transferred to the MACs during the transition, and to submit weekly and monthly workload reports. CMS reviewed these reports to check whether the legacy contractors were meeting their workload processing goals. Despite regular workload monitoring during the MAC transitions, CMS underestimated the appeals and provider call volumes. As a result, three MACs we interviewed and CMS reported large appeals workload backlogs for several months, leading to delays in resolving appeals, which in some cases led to more calls from providers with unresolved appeals. CMS officials reported that, in some cases, they required the MACs to take corrective actions, such as hiring additional staff, which two contractors noted were most often temporary staff. The agency also reported having revised workload estimates for subsequent MAC transitions. According to CMS staff, concern that legacy contractors might terminate their contracts prior to the MAC transition and cause claims payment disruptions contributed to a decision by the agency to pay incentive bonuses to 15 legacy contractors, which as of July 2009 were worth a total of about $5 million. However, CMS officials reported that, as of July 2009, payment of these bonuses was not contingent upon legacy contractors meeting specific workload reduction metrics and therefore was not used as a mechanism to ensure that legacy contractors reduced their workloads to specified levels prior to the MAC transition. 
For example, a bonus was given to a legacy contractor in a jurisdiction we reviewed in which the new MAC reported inheriting larger-than-expected numbers of appeals from the legacy contractors. The concurrent implementation of the MAC transition with other Medicare initiatives caused payment delays and other operational challenges. CMS reported that it accelerated MAC implementation to prevent potential disruptions in claims processing if legacy contractors that were not awarded MAC contracts terminated their operations prematurely. We first noted concerns about CMS’s accelerated schedule in our 2005 report, including that CMS had not integrated the planning and scheduling of MAC implementations with other initiatives. CMS did not agree with our 2005 recommendation to extend its MAC implementation schedule to allow more time for planning and midcourse adjustments. Instead, CMS reported to us that it had established a team that developed integrated schedules for the MAC implementation and other major Medicare initiatives across the agency. (See fig. 3.) Overlapping initiatives that posed particular challenges for the MAC transition included the establishment of a new standard unique provider identification number, the National Provider Identifier (NPI), which providers had to use to be paid; CMS’s establishment of Enterprise Data Centers (EDC) to house Medicare claims-processing software systems beginning in March 2006; and CMS’s implementation of the Healthcare Integrated General Ledger Accounting System (HIGLAS), a new CMS financial management system designed to incorporate information from contractor and agency financial transactions, including claims payment, beginning in May 2005. 
[Fig. 3 timeline of concurrent initiatives: CMS implements HIGLAS, its new financial management and accounting system designed to incorporate information from contractor and agency financial transactions, including claims payment; CMS implements the NPI program, which assigns each health care provider a unique identification number that is required for claims payment; CMS consolidates and transitions the EDCs, the data centers that house Medicare claims-processing software systems, reducing the total number from more than 20 different facilities to 2 (3/2006–7/2009); MAC mandated deadline: October 2011.] HIGLAS is a major CMS initiative to modernize Medicare’s accounting and financial management systems by creating a single, integrated financial accounting system to be used by CMS and all Medicare contractors. CMS began planning the transition to HIGLAS in 2001 to satisfy the objectives of the Federal Financial Management Improvement Act and the Joint Financial Management Improvement Program, but implementation was delayed until the MAC implementation began in 2005. In particular, the concurrent MAC and NPI implementations led to provider enrollment workload backlogs and, subsequently, claims payment and processing challenges, as providers were not paid until provider enrollment applications were processed. 
According to CMS, three of the six MACs we studied inherited a backlog of unprocessed provider enrollment applications from the legacy contractors (see table 2), which led to claims payment delays. CMS officials told us they were aware that the mandated deadline for legacy contractors and MACs to enroll providers in the NPI program overlapped with the MAC transition schedule in five jurisdictions, but acknowledged that they did not initially understand the full effect that the overlap with the NPI implementation would have on the MAC transitions. To a lesser extent, CMS reported other MAC operational challenges implementing the EDCs and HIGLAS in conjunction with the MACs. According to CMS officials, implementing the EDCs in conjunction with the MACs was challenging, in part because each legacy contractor had unique claims-processing system features that either had to be consolidated in the broader MAC jurisdictions or discontinued. For example, a legacy contractor may have configured its claims-processing system with 150 to 200 unique computer applications, each with specialized functions. CMS reported establishing a workgroup that was responsible for reviewing these unique claims-processing system applications and determining whether or not the application would be transferred to the EDC for use by the new MAC. CMS officials said implementing HIGLAS in conjunction with the MACs was most problematic in MAC jurisdictions where some legacy contractors had transitioned to HIGLAS prior to the MAC transition and others had not. For example, in one case a MAC had to maintain two financial management systems temporarily until the entire jurisdiction was converted to HIGLAS. CMS’s HIGLAS timeline initially required legacy contractors to convert data into HIGLAS format just before cutover to the MAC. 
In response to these and other challenges, officials reported that the agency is now implementing HIGLAS on a jurisdiction-by-jurisdiction basis after the transition to each MAC is complete. Prior to the MAC transition, CMS did not adequately monitor legacy contractors’ implementation of mandated claims-payment policy changes, generating unanticipated work for the MACs and causing provider relations challenges. In four of the six MAC jurisdictions we studied, CMS or the MAC told us that the MACs discovered and corrected claims-processing errors made by legacy contractors, errors that in some cases had generated improper payments to providers and that created additional work for the MACs, which had to make the corrections themselves during the transition. These errors were largely due to legacy contractors not properly implementing certain CMS payment policies, and they revealed that CMS had not routinely checked to ensure that legacy contractors were making changes required by CMS to pay claims correctly. Although discovering and correcting these errors eventually led to more accurate Medicare payment, the errors generated unanticipated work for the MACs and caused provider relations challenges. For example, CMS reported that a MAC discovered that a legacy contractor had made improper payments to providers for scheduled, nonemergency ambulance transportation, a service covered by Medicare only under limited circumstances. The MAC corrected the error, stopping improper payments to providers. A MAC in another jurisdiction told us that a legacy contractor had for decades paid claims that should not have been paid because it had not fully implemented requirements for certain edits related to rented equipment maintenance and service. In another example from the same jurisdiction, CMS reported that the MAC discovered that the legacy contractor’s system was ineffective in ensuring that a provider had submitted documents required for payment.
The HHS Office of Inspector General estimated that, because of this claims-processing error, Medicare paid approximately $127 million in 2006 to providers who had not submitted the required documentation. Although CMS officials told us the agency monitored the legacy contractors through periodic reviews of contractor edits, it did not discover or correct these particular errors; instead, the MACs did. CMS allowed local Medicare coverage policy to be consolidated to a stricter standard in a region and did not require MACs to clearly communicate this change, causing payment denials that providers did not anticipate. Originally, CMS instructed MACs to select the “least restrictive” local coverage determination already in place in the jurisdiction. CMS later changed its guidance to advise MACs to implement the “most clinically appropriate” local coverage determination in place in the jurisdiction, because a legacy contractor may not have had a policy in place for some topics. As a result, some providers faced more restrictive coverage determinations than they had prior to the MAC transition. For example, one provider group we interviewed reported that the incoming MAC instituted documentation criteria for treating a type of skin lesion that had not been required by the legacy contractor. Two of the three provider groups we interviewed reported that there was a lack of clear communication about this change in guidance, which caused confusion once the local coverage determinations were finalized and claims were rejected. In addition, in two jurisdictions we studied where the MACs invited providers to comment on more than 100 draft policies, two provider groups we interviewed said draft policies would be clearer if they identified which areas were changes from old policies, and one provider group said physicians’ time constraints made it difficult to review such large volumes of information.
CMS did not initially require Joint Operating Agreements (JOA) between MACs and all functional contractors, resulting in communication challenges between MACs and some key functional contractors. Initially, CMS did not require JOAs—agreements that establish roles and responsibilities—between MACs and all related contractors. CMS officials noted that in considering JOA requirements, the agency determined whether a JOA was appropriate for each particular MAC relationship. Specifically, CMS initially did not require JOAs between MACs and the EDCs, but did require JOAs between MACs and certain other contractors, such as the Beneficiary Contact Center, which runs the 1-800-MEDICARE help line for beneficiaries. One MAC we interviewed noted that it was unsuccessful in communicating directly with the EDC in its jurisdiction because there was no JOA in place. Instead, it had to direct all communication to CMS officials, who would then contact the EDC on behalf of the MAC. CMS made a midcourse correction to address this inefficiency, and required JOAs between MACs and all functional contractors (including EDCs) in implemented jurisdictions. CMS informed us that as of February 2009, these JOAs had been executed or were in progress. CMS has developed a performance assessment program for MACs that includes three reviews—the Quality Control Plan review, the Quality Assurance Surveillance Plan review (QASP), and the Award Fee Plan review. As of March 2009, CMS had completed all three reviews for three of the MACs in our sample. CMS’s on-site visits in 2007 and 2008 to review implementation of the MACs’ Quality Control Plans found that two of the three MACs’ plans required modification, which those MACs provided to CMS. Although CMS’s QASP evaluations indicated improvement from the first review period to the most recent review period we examined, the three MACs whose evaluations we examined did not meet all the QASP performance standards. 
Award Fee Plan reviews by CMS also indicated improved performance, based on the incentive metrics that the MACs met and the total award fee percentage they earned from the first review period to the most recent review period we examined. However, because the MACs did not meet all incentive metrics, they did not receive the full award fees for which they were eligible. CMS developed the MAC Performance Assessment Program to include three reviews—the Quality Control Plan review, the QASP review, and the Award Fee Plan review. (See fig. 4.) CMS designed these reviews in part to reflect MMA and FAR requirements for assessing contractor performance. For example, CMS developed an annual Medicare Contractor Provider Satisfaction Survey in 2005 and used the survey results to develop a QASP performance standard in order to meet the MMA requirement for a performance standard related to provider satisfaction. Figure 4 summarizes the three reviews. For the Quality Control Plan review, CMS conducts an on-site visit to determine whether the MAC has operationalized its Quality Control Plan. For the QASP review, CMS selects performance standards in accordance with the statement of work to develop the QASP for each MAC annually and then conducts a review to assess MAC compliance with those standards. For the Award Fee Plan review, CMS annually develops an Award Fee Plan for each MAC that contains incentives to achieve superior performance; to obtain the full amount of the potential award fee, a MAC must meet or exceed every metric, and CMS makes an award fee determination based on its review of the MAC’s performance against those metrics.
CMS officials told us that although they are conducting Award Fee Plan reviews annually as of April 2009, in previous years some reviews were conducted semiannually. The MAC Performance Assessment Program is supplemented by ongoing monitoring activities carried out by staff from various CMS divisions. These activities include communicating with MAC staff, such as conducting biweekly telephone meetings with MACs, and reviewing MAC audits and monthly status reports to oversee contractor performance. As noted in each MAC’s statement of work, MACs are required to submit monthly status reports that include information related to problems and risks encountered during the review period and the actions taken to address the problems. Quality Control Plan Review: A Quality Control Plan is submitted to CMS and is designed to describe the plans, methods, and procedures—or internal controls—that a contractor will use to meet performance standards in the statement of work such as those related to quality, quantity, time frames, responsiveness, and customer satisfaction. The plan details how the MAC intends to meet CMS’s seven required quality-control program elements outlined in the statement of work: (1) maintaining an inspection and audit system; (2) establishing a method of identifying deficiencies in services performed; (3) developing a formal system to implement corrective action; (4) documenting procedures and processes for services to ensure that services meet contractor performance requirements; (5) documenting a change-management program that ensures correct procedures and processes are followed when implementing CMS-required changes resulting from legislation, litigation, and policy; (6) providing a file to CMS of all quality records relating to inspections and audits conducted by the contractor and the corrective action implemented; and (7) providing for CMS inspections and audits. 
Lack of a fully functioning Quality Control Plan can potentially weaken a MAC’s internal controls. A MAC is required to submit its Quality Control Plan to CMS for review no later than 45 days after the contract is awarded. CMS is to conduct an on-site visit to examine implementation of the Quality Control Plan after the MAC has become fully operational to determine whether the MAC’s internal controls are in place. CMS reports the results of its review in the Quality Control Plan Review Report, and if the plan is deemed satisfactory, it is officially accepted by the agency. Quality Assurance Surveillance Plan (QASP) Review: The QASP has three parts: (1) an outline of the roles and responsibilities of CMS staff involved in the QASP review, (2) a summary of the QASP performance standards CMS developed in accordance with the statement of work and a description of the methods the agency will use to determine whether a MAC is meeting them, and (3) an excerpt from the FAR that lists policies and procedures for contract quality assurance. CMS categorizes the QASP performance standards according to several “functional areas,” or areas of Medicare operation. CMS has flexibility in choosing functional areas with applicable performance standards to use for each of the review periods, which have ranged from 6 months to 1 year. For example, CMS may choose performance standards in financial management, a functional area that relates to a MAC’s financial reporting activity, including ensuring the effective and efficient use of Medicare funds. CMS is to use the QASP review to evaluate a MAC’s performance against a subset of performance standards in accordance with the statement of work. According to CMS, if a MAC does not meet a performance standard, the agency requires an action plan to address the deficiency. The CMS project officer communicates the action plan request to the MAC.
If the CMS project officer and other CMS staff agree that there are extenuating circumstances, the requirement for the action plan can be waived. However, a written justification for the waiver must be documented. (See app. V for additional information on the QASP review.) Award Fee Plan Review: The Award Fee Plan is CMS’s method for providing financial incentives to MACs based on their performance. CMS creates an Award Fee Plan for each MAC annually, for review periods that have ranged from 6 to 12 months. For each MAC Award Fee Plan, CMS develops incentive metrics. CMS officials explained that MAC award-fee incentive metrics are generally designed to be more challenging than the standards outlined in the statement of work in order to provide incentives for the MACs to exceed those standards. For example, the claims-processing timeliness metric states that the MAC will process 97 percent of clean claims within statutorily specified time frames, a level that is set higher than the standard in the statement of work, which states that the MAC must process 95 percent of clean claims in these time frames. CMS assigns a value to, or weights, each metric to determine what percentage of the award fee can be earned by the MAC for that metric. (See app. VI for a listing of the weights assigned to each incentive metric for the three MACs we studied.) CMS uses the Award Fee Plan review to assess a MAC’s performance against each metric to determine the amount of the award fee earned for that metric. If a MAC does not meet some of its incentive metrics, it may still receive an award fee for other metrics that it meets or partially meets. For example, if CMS assigned a value of 8 percent to the claims-processing timeliness metric, and this was the only metric the MAC met, then the MAC would receive 8 percent of the total award fee.
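The weighted award-fee arithmetic described above can be sketched in a few lines of code. The weights, metric mix, and fee amount below are illustrative assumptions for this sketch, not actual CMS values.

```python
# Illustrative sketch of the weighted award-fee calculation described
# above. Weights, metric outcomes, and the fee amount are hypothetical.

def award_fee_earned(metrics, potential_fee):
    """metrics: list of (weight_pct, fraction_met) pairs, where weight_pct
    is the metric's share of the award fee and fraction_met is 1.0 for
    met, 0.0 for not met, or a fraction for partial credit."""
    earned_pct = sum(weight * fraction for weight, fraction in metrics)
    return potential_fee * earned_pct / 100.0

# Mirroring the example in the text: only the 8-percent claims-processing
# timeliness metric is met, so the MAC earns 8 percent of the fee.
metrics = [
    (8, 1.0),   # claims-processing timeliness: met
    (30, 0.0),  # appeals: not met
    (62, 0.0),  # remaining metrics: not met
]
print(award_fee_earned(metrics, potential_fee=1_000_000))  # 80000.0
```

A partially met metric would simply contribute a fraction of its weight; for example, a pair of (10, 0.5) would add 5 percent of the potential fee.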
According to CMS, agency officials can change incentive metrics every review period, depending on which aspects of the MAC’s performance need to be emphasized during that period. For example, CMS officials stated that one MAC may not initially have a provider enrollment incentive metric in its Award Fee Plan, but agency officials can incorporate it in a subsequent review period if they want the MAC to improve in this area. In determining the award fee, CMS also considers overall contract performance, such as the QASP results and other CMS monitoring activities. (See apps. V and VI for additional information on the Award Fee Plan review.) In examining implementation of the Quality Control Plans during its on-site visits to the MACs in 2007 and 2008, CMS found that the plans of two of the three MACs whose reviews we examined required modifications. For example, CMS’s Quality Control Plan Review Report for one MAC indicated an inconsistency in the contractor’s process for closing action plans in its Part B Overpayments Recovery area. The MAC management staff agreed to modify the action plan process, and a CMS official confirmed that the MAC submitted a revised Quality Control Plan, which the agency accepted. CMS’s Quality Control Plan Review Report for the other MAC indicated that modifications were needed to help mitigate risks to the agency and its beneficiaries. For example, there were problems with the MAC’s process for identifying and reporting deficiencies and managing corrective actions, such as the lack of a formal system for implementing corrective actions. A CMS official told us that the MAC submitted a revised Quality Control Plan, which the agency accepted. CMS’s QASP reviews for the three MACs showed that they had improved their performance from the first review period to the most recent review period we examined, but none met all standards in any one review period.
As of March 2009, CMS had completed two or three QASP reviews for each of the three MACs we studied. While the three MACs met from 41 to 67 percent of their performance standards in their first review periods, by the later review periods each MAC had met a higher number of performance standards, meeting 52 to 75 percent of the standards assessed. (See fig. 5.) None of the three MACs met all of its QASP performance standards in any review period, however. Specifically, CMS found that these MACs did not meet a number of QASP performance standards in six of the nine functional areas reviewed during those periods. (See app. VII for details on the QASP performance of the three MACs, including which functional areas were reviewed.) Performance was generally poorest in the functional areas of Appeals and Medicare Secondary Payer. For example, CMS indicated that one MAC experienced challenges in some functional areas, such as Appeals, that hindered its ability to meet relevant performance standards. The project officer requested an action plan that outlined how the MAC intended to work down the appeals backlogs. CMS’s Award Fee Plan reviews showed that each of the three MACs improved its performance on incentive metrics from its initial review period to its later review periods. As shown in figure 6, both the percentage of incentive metrics met and the percentage of the total award fee earned increased. Each MAC was paid less than half of the full award fee for which it was eligible in its first review period, but earned a higher percentage in subsequent periods for metrics it met. For example, MAC III met two of seven metrics, or 29 percent, and received 47 percent of the full award fee in the first review period. For its second review period, it met four of seven metrics, or 57 percent, and received 60 percent of the full award fee. By its last review period, the MAC met seven of eight metrics, or 88 percent, and was paid 86 percent of the full award fee.
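The metrics-met percentages cited for MAC III follow directly from the counts in CMS’s reviews; a brief sketch, using the counts from the text and rounding to the nearest whole percent, reproduces them. The award-fee percentages (47, 60, and 86) reflect CMS’s weighting and fee determinations and cannot be derived from the counts alone.

```python
# Reproduce MAC III's metrics-met percentages from the counts cited above.

def pct_met(met, total):
    """Percentage of incentive metrics met, rounded to the nearest percent."""
    return round(100 * met / total)

review_periods = [(2, 7), (4, 7), (7, 8)]  # (metrics met, metrics assessed)
print([pct_met(m, t) for m, t in review_periods])  # [29, 57, 88]
```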
While all three MACs received a portion of the award fees for which they were eligible as a result of the incentive metrics they met in their Award Fee Plan reviews, they did not meet some incentive metrics, particularly metrics in areas related to beneficiary and provider service. All three of the MACs consistently met or partially met the Contract Administration metric—a measure that assessed the contractors’ service to CMS in contract management, such as providing quality deliverables on time. However, in some cases, they did not meet some beneficiary and provider service metrics for superior performance in areas CMS assessed, such as (1) Provider Relations—Accuracy, which assessed the accuracy of responses to providers’ Medicare policy questions; (2) Claims Processing Timeliness; (3) Appeals; (4) Beneficiary Inquiries, which measured the timeliness of responses to beneficiaries; and (5) support to the Qualified Independent Contractor, a contractor that handles the second-level appeals of denied claims. (See app. VIII for details on the award-fee performance of the three MACs, including which areas were reviewed.) For example, MAC II did not meet more than half of the incentive metrics it was assessed against in its first and second review periods in areas such as Appeals, Beneficiary Inquiries, and Provider Relations—Accuracy. CMS has not tracked and provided information on all of its costs and savings related to Medicare contracting reform, and so the total costs and savings for Medicare contracting reform are uncertain. The agency has provided information on its external costs associated with establishing and supporting contracts, but has not provided information on its internal costs for conducting contracting reform activities, such as salaries. Similarly, CMS has not provided information on the total savings related to contracting reform. 
The agency provided information on some savings due to reductions in operational spending that it attributes to contracting reform and other activities related to claims payment; however, it has not provided information on what it had previously estimated would be the major source of savings, reduced improper payments to providers resulting from contracting reform. CMS tracked and provided information on contracting reform costs of about $300 million from fiscal year 2004 through fiscal year 2008, but could not readily account for certain internal administrative costs of implementing the MAC program, such as agency staff salaries and overhead. In response to our request for the total costs of Medicare contracting reform, CMS provided information on external costs beginning in fiscal year 2005 for areas such as contractor transition and termination costs, provider surveys, contract support activities, and technology associated with contracting reform, including information management systems and developing the EDCs. Of the approximately $300 million in external costs that CMS indicated were spent for contracting reform from fiscal year 2004 through fiscal year 2008, most (approximately $260 million) were incurred in fiscal year 2007 and fiscal year 2008. (See table 3.) From fiscal year 2004 through fiscal year 2006, CMS paid contracting reform costs out of a lump-sum appropriation for program management, as CMS did not receive appropriations specifically for contracting reform until fiscal year 2007. Funds that were appropriated for contracting reform for fiscal years 2007 and 2008 were available for 2 fiscal years, instead of the usual 1 fiscal year; these are referred to in this report as “2-year funding.” For both fiscal year 2007 and fiscal year 2008, CMS indicated that it spent less than the amount appropriated for contracting reform and carried over the unused portion of the funding to the next fiscal year.
The appropriations act for fiscal year 2009 made $108.9 million available for contracting reform and designated it as 2-year funding. CMS did not include certain internal expenses as part of its accounting of Medicare contracting reform costs, leading to uncertainty about the total cost of the effort. In response to our request, CMS was able to compile selected internal costs for contracting reform in fiscal year 2008 totaling almost $661,000 and told us that, in general, the internal costs associated with contracting reform are small compared to the external costs. (See table 4.) The contracting-reform-related internal costs CMS provided information on for fiscal year 2008 included categories such as travel, overtime, training, and supplies, but did not include internal costs for agency staff salaries, including legal services to address bid protests, and overhead. CMS said that internal costs comparable to those it provided information on for fiscal year 2008 were not readily available for other years. In addition, CMS officials told us that the agency does not routinely track the internal costs such as staff salaries related to initiatives like contracting reform, mainly because CMS’s accounting system does not allocate payroll costs by specific project. Although CMS estimated that it would achieve savings from two sources— reduced spending on administrative functions and savings from the Medicare trust funds related to better claims review leading to reduced improper payments—the agency has provided information only on administrative savings, making the total amount of any savings and the extent to which they are due to contracting reform uncertain. In 2005, we reported that CMS expected contracting reform to generate savings totaling over $1.9 billion from reduced spending on Medicare administration and from reduced improper payments. However, as of April 2009, CMS was unable to quantify and provide information on total savings realized. 
Most of the estimated savings were expected to come from funds CMS could avoid spending from the Medicare trust funds by reducing improper payments for Medicare services, with fewer savings anticipated from reduced administrative spending. As of April 2009, CMS had identified for us reduced spending on operational activities that it considered administrative savings due to contracting reform. However, it had not provided information on any savings to the Medicare trust funds based on a reduction in improper payments due to contracting reform. As of November 2008, the estimated percentage of Medicare fee-for-service payments that were improper had been declining since fiscal year 2004, and CMS attributed some of the reduction in improper payments to contracting reform activities. However, according to CMS, the agency is not tracking savings to the Medicare trust funds from contracting reform and therefore is unable to quantify total savings. Further, in November 2009, CMS reported that an estimated $24.1 billion in fee-for-service payments from April 2008 to March 2009 were improper, which was higher than its November 2008 estimate of $10.4 billion for claims paid from April 2007 to March 2008. CMS also reported that it had changed its methodology for conducting the error-rate measurement, which could make trend comparisons with past years’ estimates unreliable. These changes make it more uncertain what savings to the Medicare trust funds, if any, may be due to contracting reform. In addition, incongruence between the spending categories CMS used in its 2005 savings estimate and the categories it used to provide information on reduced spending for selected Medicare operational activities from fiscal year 2005 through fiscal year 2008 makes it impossible to directly compare CMS’s estimated and actual savings to date. CMS indicated that spending for certain Medicare operational activities began decreasing in fiscal year 2006 and continued decreasing through fiscal year 2008.
(See fig. 7.) The agency provided information to show a decrease in the annual operating cost of these Medicare operational activities from fiscal year 2005 through fiscal year 2008, when spending reached just over $1.8 billion. According to CMS, the agency spent nearly $280 million less for these selected Medicare operational activities in fiscal year 2008 than it did in fiscal year 2005, the year with the highest level of spending for these activities during this period. CMS indicated that savings as a result of reduced spending for these selected Medicare operational activities are due to several factors, including efficiencies gained from Medicare contracting reform. For example, CMS officials said that consolidation of program functions as a result of contracting reform led to cost reductions. Specifically, the agency noted that consolidating data processing functions under the EDCs, which CMS includes as part of contracting reform, resulted in lower operating costs than data processing in the legacy environment. In addition, CMS noted that increased competition led contractors to implement cost-cutting measures, such as site closures, to achieve a competitive advantage in obtaining a MAC contract. However, the agency was unable to quantify these savings specifically and to isolate the effects of contracting reform on spending for operational activities from the effects of other activities related to claims payment. Therefore, it could not quantify the extent to which these and other examples of reduced spending were due to Medicare contracting reform, resulting in uncertainty about savings due specifically to contracting reform. We provided a draft of this report to HHS for comment and received written comments from the agency, which are reproduced in appendix IX. We also solicited comments on our draft report from representatives of the six MACs in our sample as well as the three provider associations we interviewed.
Of those invited to review the draft report, three MAC representatives accepted and provided oral comments to us. In addition to the overall comments discussed below, we received technical comments from HHS and MAC representatives, which we incorporated as appropriate. We obtained written comments on our draft report from HHS, on behalf of CMS. HHS generally agreed with our draft report findings and praised GAO for recognizing the progress CMS has made in implementing Medicare fee-for-service contracting reform. In response to the draft report’s discussion on the implementation of Medicare contracting reform, HHS indicated that it agreed with our finding that CMS took several steps to implement contracting reform, particularly noting that it was one of the most complex operational initiatives that the agency has ever undertaken. HHS also generally agreed with our finding regarding CMS’s performance assessments of three MACs whose reviews we examined. In one of its technical comments, HHS noted that there are other performance-related reviews it considers when evaluating MAC performance that we did not highlight in the draft report. These reviews relate to a broader set of activities than those within the scope of the report; we focused specifically on the three key reviews administered through the MAC Performance Assessment Program because CMS officials reported to us that these reviews are the key components of the program. Finally, HHS generally agreed with our finding regarding the uncertainty of the total costs and savings for contracting reform. HHS noted, however, that CMS provided us with information supporting reduced spending on Medicare fee-for-service operations after 2005 that was not fully captured in the draft report. 
Our draft report included information that showed accrued savings due to reduced spending on Medicare fee-for-service operational activities after fiscal year 2005; however, we excluded fiscal year 2009 information because, at the time of our review, CMS reported fiscal year 2009 costs as estimates. We also noted that CMS was unable to isolate the effects of contracting reform spending for Medicare operational activities from the effects of other activities related to claims payment. The three MAC representatives who reviewed the draft report generally agreed that it accurately reflected challenges during the implementation of Medicare contracting reform. Two of the MAC representatives provided additional detail on the challenges created because CMS and the outgoing contractors did not accurately estimate workloads during the transitions. In addition, they elaborated on the challenges created by CMS’s concurrent implementation of the MAC transition with other Medicare initiatives, such as NPI and HIGLAS. One representative attributed some of the workload increase to a failure by providers to apply for their new NPIs by the national deadline. Another MAC representative indicated that once the transition challenges began, CMS responded quickly and efficiently to address them. However, this representative also stated that he expected more discussion in the draft report of the MAC procurement process, particularly the delays and uncertainties resulting from the bid protests in some jurisdictions. Our report focused on the MAC jurisdictions where a final award had been made by June 2008 rather than on the procurement process leading up to the MAC awards. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. 
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at kingk@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix X. To determine how the Centers for Medicare & Medicaid Services (CMS) implemented Medicare contracting reform, we selected a sample of 6 Medicare Administrative Contractor (MAC) jurisdictions for in-depth review from among the 10 where a final award had been made by June 2008. The 6 MAC jurisdictions that we selected for review in this engagement were among the earliest to be implemented, and thus had the longest experience from which we could learn about implementation and performance assessment. Our criteria for selecting the 6 MAC jurisdictions were designed to ensure diversity in geographic region, in the volume of claims workload, in the complexity of transition (such as the number of legacy contractors in the region whose workloads had to be transitioned to a single MAC), bid protest experience, and CMS’s assessment of a jurisdiction’s risk for fraud. For example, we selected MAC jurisdictions based on areas CMS selected for its demonstration projects that targeted fraudulent business practices. The sample includes 2 MACs that process durable medical equipment claims (DME MAC) and 4 MACs that process both Part A and Part B Medicare claims (A/B MAC). We also examined documents and conducted interviews with CMS officials. Specifically, we reviewed documents including CMS’s acquisition strategy, requests for proposals, implementation handbooks, MAC monthly status reports, and CMS’s planning tools such as timelines and maps. 
We interviewed CMS staff responsible for coordinating the contract procurement for, and implementation of, the 6 MAC jurisdictions in our sample, as well as the Implementation Support Contractor CMS hired to assist it in implementing the A/B MACs. A division within GAO, separate from the division that conducted this review, is responsible for resolving certain federal contract protests. Given its role, we did not assess the solicitation or award of the MAC contracts. In addition, for the 6 jurisdictions we selected, we interviewed incoming MACs and certain legacy contractors. We also interviewed health care provider organizations located in three states within 2 of the 6 MAC jurisdictions in our sample, including three state medical organizations. We selected provider organizations for interviews based on whether contractors or CMS officials we interviewed specifically mentioned them as having raised concerns about the MAC implementation. In addition, to understand the national scope of contract reform implementation issues from the provider perspective, we gathered information from national medical, hospital, and other provider organizations, including the American Medical Association, the American Hospital Association, and the American Health Care Association. Finally, we analyzed documents and conducted interviews to understand what lessons CMS may have learned that may help inform future award cycles. CMS completed its pilot of the annual provider satisfaction survey in 2005. The survey is designed to measure provider satisfaction with key services performed by Medicare fee-for-service contractors, such as the accessibility of provider education and training from a MAC. We did not assess the survey results for the MACs; however, we did analyze the results of CMS’s reviews that had been conducted for DME and A/B MACs as of March 2009. To determine CMS’s costs and savings for Medicare contracting reform, we reviewed and analyzed documents related to CMS’s budget, estimated costs, and estimated savings, and interviewed CMS officials. 
Specifically, we reviewed documents including CMS’s budget justifications for fiscal years 2005 through 2009; appropriations acts for fiscal years 2004 through 2009; and CMS data on estimated savings, transition and termination costs, and other costs associated with contracting reform. We also interviewed CMS officials responsible for development and oversight of contracting reform budgets and estimates of potential costs and savings to understand CMS’s process for tracking and reporting the financial status of contracting reform. Further, we reviewed criteria for good governance practices to determine the importance of complete information on the costs of federal programs and activities for the effective management of government operations and for assisting Congress and internal and external users in assessing the operating performance and stewardship of program activities. To assess the reliability of CMS-reported internal and external cost data for contracting reform and CMS-reported spending data for selected Medicare operational activities, we conducted interviews with knowledgeable agency officials and reviewed for reasonableness the assumptions associated with the collection and compilation of the costs and savings data. Based on these reviews and discussions, we found the data reliable for the purposes of this report. We conducted this performance audit from May 2008 through March 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
The Medicare Administrative Contractor (MAC) Performance Assessment Program comprises three reviews—the Quality Control Plan review, the Quality Assurance Surveillance Plan (QASP) review, and the Award Fee Plan review. This appendix provides supplementary information about the QASP and Award Fee Plan reviews. In implementing a QASP review, Centers for Medicare & Medicaid Services (CMS) staff who interact with the project officer include business function leads and technical monitors. Business function leads are responsible for determining the QASP performance standards and for deciding whether the review will consist of an on-site visit or a desk review. They are subject matter experts in Medicare functional areas, such as claims processing. They inform project officers of performance-related issues and identify areas that require closer inspection at on-site visits. CMS officials told us that, for a given MAC, if there is a significant amount of data to be reviewed at the contractor site, they will make an on-site visit, or if the MAC’s performance information is available through a CMS system, they will do a desk review. Technical monitors are responsible for conducting the QASP reviews for their specialty functional area. According to CMS officials, technical monitors support project officers by assessing MAC performance and reporting their findings to the project officers. They summarize the results of their reviews in a report that outlines whether a MAC met the standards. The Award Fee Plans consist of subjective and objective incentive metrics. Subjective metrics can be classified as met, partially met, or not met, whereas objective metrics can be classified as met or not met. For the first contract year, each Award Fee Plan included a contract- administration metric—the only subjective incentive metric in the plan. 
This metric assesses the MAC’s efforts in contract management and providing service to CMS, such as maintenance of the appropriate level of staff to perform duties outlined in the statement of work, cost management, communication, and submission of deliverables like the Quality Control Plan to the agency on time. For this metric, the MACs can receive all, some, or none of the award fee specifically allocated for it, using a point scale the agency developed. In addition, agency officials told us that they selected objective incentive metrics in functional areas they considered to be the most important for new MACs, such as claims-processing timeliness and beneficiary and provider relations. For each objective metric, a MAC can receive all or none of its award fee for that metric, but generally cannot receive a partial fee. The Centers for Medicare & Medicaid Services (CMS) assigns a value to, or weights, each metric in an Award Fee Plan to determine what percentage of the award fee can be earned by a Medicare Administrative Contractor (MAC) for that metric. Figure 8 of this appendix highlights the weights CMS assigned to the Award Fee Plan incentive metrics for three MACs assessed from 2006 through 2008. The Centers for Medicare & Medicaid Services’s (CMS) Quality Assurance Surveillance Plan (QASP) reviews for three Medicare Administrative Contractors (MAC) showed that they had improved their performance from the first review period to the most recent review period we reviewed but did not meet all standards in any one review period. Figure 9 of this appendix provides details on each MAC’s QASP performance assessed from 2006 through 2008, including which functional areas were reviewed. 
The Centers for Medicare & Medicaid Services’s (CMS) Award Fee Plan reviews for three Medicare Administrative Contractors (MAC) showed that they had improved their performance from the first review period to the most recent review period we reviewed, but because the MACs did not meet all incentive metrics, they did not receive full award fees. Figure 10 of this appendix includes information about the award fee earned by each MAC and the incentive metrics the MACs were assessed against from 2006 through 2008. In addition to the contact named above, Sheila K. Avruch, Assistant Director; Jennie F. Apter; La Sherri Bush; Jill Center; Helen Desaulniers; Sarah-Lynn McGrath; Roseanne Price; Kristal Vardaman; Ruth S. Walk; Jennifer Whitworth; and William T. Woods made key contributions to this report.

The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 significantly reformed contracting for payment of Medicare's $310 billion per year in fee-for-service claims. The Centers for Medicare & Medicaid Services (CMS) is transitioning claims administration to 19 new entities known as Medicare Administrative Contractors (MAC) and plans to complete the process ahead of October 1, 2011, the date required by law. In 2005, GAO reported that CMS's plan to accelerate the transition could create challenges and was based on estimated costs and savings that were uncertain. In this report GAO examined (1) how CMS has implemented Medicare contracting reform; (2) how CMS assessed the performance of the MACs and what the results of its assessments have been; and (3) what CMS's costs and savings have been for Medicare contracting reform. GAO selected a sample of 6 transitions to review from among the 10 MAC contracts awarded as of June 2008, based on factors such as geographic diversity, volume of claims workload, and transition complexity. 
GAO analyzed CMS documents related to the MAC transitions, including performance assessments for 3 of the 6 MACs in the sample that had results available for three types of reviews as of March 2009, and interviewed CMS officials, contractors, and provider groups. CMS took numerous steps to facilitate the complex implementation of Medicare contracting reform, but certain decisions led to challenges during the six MAC transitions we reviewed, such as payment delays to providers. For example, CMS's accelerated implementation schedule overlapped with other Medicare initiatives that affected claims processing, such as requiring that providers re-enroll in order to be paid, which resulted in claims payment delays. In addition, despite regular workload monitoring of the former contractors during the MAC transitions, CMS gave the MACs inaccurate workload estimates. For example, one MAC originally planned on receiving 15,000 appeals cases but actually inherited 46,500 cases, which led to processing backlogs and delayed payments to providers. However, CMS also incorporated lessons learned and made midcourse adjustments to address some of these challenges. CMS has assessed the MACs using a program it developed, and in the reviews we examined the MACs did not meet all standards and metrics. CMS's assessment program includes an initial review of each MAC's internal controls and two subsequent reviews to assess performance. One of these reviews compares a MAC's performance to standards in accordance with its contract and the other provides an incentive award fee if the MAC meets selected metrics that are designed to reflect high performance. Results available as of March 2009 from the assessments of three of the six MACs in GAO's sample show that the three MACs improved their performance over time but did not meet all metrics. 
For example, while the three MACs consistently met or partially met a metric that assesses contract management, they did not meet some beneficiary and provider service metrics. In addition, because they did not meet all incentive metrics, they did not receive full award fees. CMS's total costs and savings to date for Medicare contracting reform are uncertain because CMS does not track and provide information on all related costs and savings. The agency provided information on costs associated with contracts, which totaled a little over $300 million for fiscal years 2004 through 2008. It also provided information on some internal agency costs for conducting contracting reform, but did not track others, such as agency staff salaries. Although CMS expected contracting reform to generate substantial savings from reduced spending on administrative functions and savings to the Medicare trust funds due to improved claims review to detect payments that should not be made, as of April 2009, CMS was unable to provide information on total savings. CMS provided some information on savings due to reductions in operational spending, but the extent to which these savings were attributable to contracting reform is uncertain. CMS did not track or provide information on savings to the Medicare trust funds due to reduced improper payments related to contracting reform activities. CMS reviewed a draft of this report and generally agreed with GAO's findings. |
Each service academy operates its own preparatory school. The U.S. Air Force Academy Preparatory School is co-located with the U.S. Air Force Academy in Colorado Springs, Colorado. The U.S. Military Academy Preparatory School is located at Fort Monmouth, New Jersey, and the U.S. Naval Academy Preparatory School is located in Newport, Rhode Island. (See fig. 1.) During World War I, the Secretaries of the Army and the Navy nominated enlisted personnel to their respective service academies. Many of the first enlisted personnel did poorly on service academy entrance examinations, and many of the slots that were created for them went unfilled. To coach enlisted nominees for service academy entrance examinations, Army and Navy officials formally established the Military Academy and Naval Academy preparatory schools in 1946 and 1920, respectively. (The U.S. Air Force Academy was created in 1954, and its preparatory school in 1961.) The preparatory schools have evolved over the years and become more diverse. Today, the student bodies of these schools consist of enlisted personnel, minorities, recruited athletes, and women (see table 1). To be admitted to a preparatory school, an applicant must meet basic eligibility requirements. Because applicants to the academies must (1) be unmarried, (2) be a U.S. citizen, (3) be at least 17 years of age and must not have passed their twenty-third birthday on July 1 of the year they enter an academy, (4) have no dependents, and (5) be of good moral character, the preparatory schools apply the same requirements. The preparatory schools do not charge for tuition. The enlisted personnel who are selected to attend the preparatory schools are reassigned to the preparatory schools as their duty stations, and these enlisted personnel continue to be paid at the grades they earned before enrolling. Civilians who are selected to attend the preparatory schools enlist in the reserves and are paid about $700 per month. 
Enlisted personnel must complete their military obligations if they do not complete the programs or go on to one of the academies. Civilian students do not incur any financial or further military obligation if they do not complete the programs or go on to one of the academies. However, they also do not accrue any transferable college credits while attending the preparatory schools. The preparatory schools offer a 10-month course of instruction that combines academic instruction, physical conditioning, and an orientation to military life. The daily schedule includes several hours of classroom instruction, mandatory study time, and extra instruction; time for athletics or physical training; and some instruction in military customs and practices. Emphasis is placed on giving each candidate as much tutorial assistance as is necessary to maximize the individual’s potential for success. The student body at each school is organized into a military unit with a student chain of command that is advised by commissioned and noncommissioned officers. This structure is intended to provide the students with exposure to military discipline and order. In fiscal year 2002, DOD reported that the total cost to operate all three preparatory schools was about $22 million (see table 2). We did not independently verify or evaluate these costs. OUSD/P&R, the service headquarters, and the service academies have established clear roles and responsibilities for oversight of the preparatory schools. According to DOD Directive 1322.22 (Service Academies), the Under Secretary of Defense for Personnel and Readiness has responsibility to assess the operations and establish policy and guidance for uniform oversight and management of the service academies and their preparatory schools. The service headquarters perform their oversight over their respective academies and preparatory schools in accordance with the directive. 
The superintendent of each academy reports directly to the uniformed head of his respective service (the Chiefs of Staff for the Army and the Air Force and the Chief of Naval Operations for the Navy), in accordance with the chain of command for each service. The academies perform the primary DOD oversight function for their respective preparatory schools. The commanding officers at the Air Force and Army preparatory schools hold the rank of colonel, and the head of the Navy’s preparatory school holds the equivalent rank of captain. They report directly to the superintendent of their respective service academies, in accordance with the chain of command for each service. Appendix II provides general information about the three service academy preparatory schools. The three preparatory schools’ current mission statements do not clearly define the purpose for which the schools are being used by their respective service academies. Mission statements should define an organization’s purpose in language that states desired outcomes. Mission statements also bring the organization’s vision into focus, explain why it exists, and tell what it does. Without a clear mission statement, the organization cannot establish goals that fully reflect the organization’s intended purpose. Although the preparatory schools exist to help the service academies meet their diversity needs, the schools’ mission statements simply refer to preparing “selected personnel who meet special needs,” “selected candidates,” or “candidates” for admission to and success at the service academies. These mission statements are not clearly aligned with DOD guidance, which states that primary consideration for enrollment shall be accorded to nominees to fill officer objectives for three target groups: (1) enlisted personnel, (2) minorities, and (3) women. 
Senior academy officials told us that their expectations of the preparatory schools are consistent with DOD guidance on enrollment objectives and that they also rely on the preparatory schools to meet their needs for a fourth group— recruited athletes—adding that the service academies would not be able to meet their diversity needs if the preparatory schools did not exist. However, neither DOD nor the service academies have required the preparatory schools to align their mission statements to reflect DOD’s guidance and the service academies’ expectations. As a result, none of the mission statements are explicit about the preparatory schools’ intended purpose. Table 3 presents more detailed information on the preparatory schools’ mission statements. Even though the mission statements are not explicit about the schools’ intended purpose, data on the number of students belonging to target groups who enter the preparatory schools and then enter the service academies indicate that, in practice, the schools are giving primary consideration for enrollment to those target groups identified by the DOD directive and the service academies—namely, enlisted personnel, minorities, recruited athletes, and women—and are primarily preparing those student groups for admission to the service academies. Preparatory school and service academy admissions data over a 10-year period indicate that the preparatory schools are a source for the academies of target groups—enlisted personnel, minorities, recruited athletes, and women—identified by DOD guidance and service academy officials. Average admissions data on the representation of targeted groups in the preparatory schools for preparatory school academic years 1993 through 2002 are shown in figure 2. (Appendix III contains detailed enrollment figures, by target group, for each of the preparatory schools.) 
Figure 3 shows the average percentage of each targeted group enrolled at the service academies that came from the preparatory schools for the same time period. We first identified this lack of clarity in mission statements in our 1992 report on the preparatory schools. In the 1992 report, we concluded that the preparatory schools’ missions were not clearly defined and that the preparatory schools appeared to be pursuing somewhat differing goals for the target groups of enlisted personnel, minorities, recruited athletes, and women—the primary groups the schools served at that time. We recommended that the Secretary of Defense determine what role the preparatory schools should play among the services’ officer production programs and direct the services to clarify their school missions accordingly. To address this lack of clarity, DOD indicated that it planned to work with the services to develop a consistent mission statement for these schools that would be approved by May 1992. As discussed previously, however, the preparatory schools’ current mission statements still do not clearly define the purpose for which the schools are being used by their respective service academies. It is difficult to evaluate how effective the preparatory schools have been in accomplishing their missions because the service academies have not established performance goals for their preparatory schools. The service academies rely on the preparatory schools to meet their targeted needs for enlisted personnel, minorities, recruited athletes, and women. The preparatory schools collect a substantial amount of performance data for these targeted groups. However, without mission-linked performance goals and measures, the service academies cannot objectively and formally assess these data to determine mission effectiveness. 
Without specific performance goals, there is no objective yardstick against which to gauge preparatory school effectiveness, as would be consistent with the principle of best practices for ensuring optimal return on investment. With performance goals against which to compare actual performance, an organization can gauge how effectively it is meeting its mission. To assess effectiveness in achieving its mission, an organization should establish performance goals to define the level of performance to be achieved by a program; express such goals in an objective, quantifiable, and measurable form; provide a basis to compare actual program results with performance goals; and report assessment results, including actions needed to achieve unmet goals or make programs minimally effective. The preparatory schools collect performance data, such as the number of students admitted to the schools, the types of students (enlisted personnel, minorities, recruited athletes, and women) admitted, and the number who entered and graduated from the academies. These descriptive data show, among other things, that during the past 10 years, an average of 76 percent of students enrolled at the preparatory schools graduated from them. Data for this same 10-year period show that a smaller percentage of all students admitted to the preparatory schools graduated from or are still attending the academies. For example, 51 percent of students who were admitted to the Air Force Academy preparatory school, 56 percent of students admitted to the Military Academy preparatory school, and 59 percent of students admitted to the Naval Academy preparatory school graduated from or are still attending their respective academies. Senior officials at the preparatory schools and academies stated that they are satisfied with these results. 
Figure 4 shows the average number of students who entered the preparatory schools, graduated from the preparatory schools, entered the academies, and graduated from or are still attending the academies for preparatory school academic years 1993 through 2002. Appendix IV provides more detailed information, for class totals and by target groups, on the percentage of students who entered the preparatory schools and graduated from or are still attending the academies between preparatory school academic years 1993 and 2002. Appendix V provides more detailed information, for class totals and target groups, on the percentage of students who graduated from the preparatory schools for that same time period. Appendix VI provides more detailed information, for class totals and by target groups, on the percentage of preparatory school graduates who accepted appointments to the academies. The service academies have not established quantified performance goals for their preparatory schools. However, they do have implicit expectations. Senior officials at both the preparatory schools and the academies told us that the preparatory schools are expected to enable preparatory school students to (1) meet the service academies’ academic standards and (2) graduate from the service academies at rates comparable to the rates of students who received direct appointments to the service academies. A 2.0 grade point average is the minimum level of academic performance accepted at the academies. Our analysis of academy data for the graduating class of 2002 shows that preparatory school graduates, as a group, exceeded the 2.0 grade point average but had slightly lower cumulative grade point averages than did the student body as a whole. Figure 5 shows the cumulative grade point averages for preparatory school graduates and service academy student bodies as a whole for the class of 2002. 
For preparatory school academic years 1993 through 1998, an average of 73 percent of preparatory school graduates who accepted appointments to the academies graduated from the service academies, while the average rate for students directly admitted to the academies was 78 percent for the same years. Thus, graduation rates for preparatory school graduates were slightly lower than the rates for students directly admitted to the service academies. The academies, however, do not have a performance target for graduation rates for preparatory school graduates, and therefore these rates do not necessarily represent the achievement of a desired outcome. Figure 6 shows the average percentage of preparatory school students who graduated from the academies and the average percentage of directly appointed students who graduated from the academies for preparatory school academic years 1993 through 1998. Appendix VII provides more detailed information for comparative graduation rates for preparatory school academic years 1993 through 1998 for each preparatory school. We first found that DOD had not established specific performance goals for the preparatory schools in our 1992 review on the service academy preparatory schools. In that report, we concluded that without such goals, DOD lacked the tools it needed to determine whether the schools were effective. DOD still has not required the academies to establish quantified performance goals that are clearly linked with the mission of the schools. The effectiveness of DOD, military service, and service academy oversight is limited because the existing oversight framework for assessing preparatory school performance does not include, among other things, performance goals and mission statements—as discussed in previous sections of this report—and objective measures against which to assess performance. 
An effective oversight framework includes tracking achievements in comparison with plans, goals, and objectives and analyzing the differences between actual performance and planned results. The interrelationship of these elements is essential for accountability and proper stewardship of government resources, and for achieving effective and efficient program results. Without formal goals and measures that are, moreover, linked to mission statements, oversight bodies do not have sufficient focus for their activities and cannot systematically assess an organization’s strengths and weaknesses or identify appropriate remedies to achieve the best value for the investment in the organization. OUSD/P&R, the services, and the service academies have established mechanisms to conduct oversight of the preparatory schools through DOD guidance established in 1994. OUSD/P&R is required to assess and monitor the preparatory schools’ operations based on the information provided in the annual reports it requires from the service secretaries. The service headquarters are responsible for oversight for their respective academies and preparatory schools, and they oversee the schools’ operations through the annual preparatory school reports that they submit to OUSD/P&R. These reports contain data on various aspects of preparatory school performance, such as student demographic trends, admissions trends, and attrition. The service academies exercise direct oversight of their respective preparatory schools and monitor the schools’ performance through ongoing collection of data required by OUSD/P&R. For example, each of the service academies collects preparatory school data such as the number of students admitted to the schools, the types of students (enlisted personnel, minorities, recruited athletes, and women) admitted, and the number who entered and graduated from the academies. 
DOD, the service headquarters, and the service academies, through these annual assessment reports, are able to compare aspects of preparatory school performance against prior period results. For example, service academy data show that over the past 10 years, 51 percent of students who were admitted to the Air Force Preparatory School, 56 percent of students admitted to the Military Academy Preparatory School, and 59 percent of students admitted to the Naval Academy Preparatory School graduated from or are still attending their respective academies. Other data reported by the preparatory schools show that the percentage of students in the target groups admitted to the schools has varied over the past 10 years. However, as mentioned in previous sections of this report, the preparatory schools lack quantified performance goals that are linked to clear mission statements. Without goals linked to clear mission statements, DOD, the service headquarters, and the service academies do not have an objective basis by which to judge the effectiveness of the preparatory schools’ performance of their missions. Although the service academy preparatory schools receive oversight from a number of organizations, they lack clear mission statements and quantified performance goals and measures. Thus, there is no objective yardstick against which to gauge preparatory school performance, consistent with the principle of best practices for ensuring optimal return on investment. This conclusion reiterates our 1992 report’s finding that the preparatory schools lacked clear mission statements and that DOD lacked the tools necessary to determine whether the schools were effective. 
We recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in concert with the service headquarters and service academies, to clarify the preparatory schools’ mission statements by aligning these statements with the department’s guidance and the academies’ expectations, which target student groups for primary enrollment consideration; establish quantified performance goals and measures, linked with the schools’ mission statements; and enhance the existing oversight framework by using quantified performance goals and measures to objectively evaluate the performance of the preparatory schools. In commenting on a draft of this report, DOD concurred with our recommendations and indicated that the mission statements of the preparatory schools will be aligned with DOD guidance and service expectations and that quantitative goals will be established to create effective measures and appropriate standards for success. DOD added that the Office of the Under Secretary of Defense for Personnel and Readiness will review and analyze these statistics over time to ensure the successful performance of the preparatory schools. DOD’s comments are reprinted in their entirety in appendix VIII. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me on (202) 512-5559 if you or your staff have any questions concerning this report. Key contributors are listed in appendix IX. To assess the adequacy of the mission statements of the preparatory schools, we interviewed officials at the following locations: the Office of the Under Secretary of Defense for Personnel and Readiness, Washington, D.C.; the U.S. 
Air Force Academy, Washington Liaison Office, Washington, D.C.; Headquarters, Department of the Army, Personnel, Washington, D.C.; Headquarters, Department of the Navy, Office of Plans and Policy, Washington, D.C.; the U.S. Air Force Academy, Colorado Springs, Colorado; the U.S. Military Academy, West Point, New York.; the U.S. Naval Academy, Annapolis, Maryland; the U.S. Air Force Academy Preparatory School, Colorado Springs, Colorado; the U.S. Military Academy Preparatory School, Fort Monmouth, New Jersey; and the U.S. Naval Academy Preparatory School, Newport, Rhode Island. We obtained and reviewed Department of Defense (DOD), service, service academy, and academy preparatory school guidance, service academy strategic plans and instructions, and preparatory school annual reports on operations and performance. Using data provided to us by the preparatory schools, we analyzed aggregate data for preparatory school academic years 1993 through 2002, by class totals and by four groups of students—enlisted personnel, minorities, recruited athletes, and women—to ascertain the extent to which these four groups of students were being admitted to the preparatory schools; at what rates these four groups of students graduated from the preparatory schools and accepted appointments to the academies; and how well these four groups fared at the academies in comparison with their nonpreparatory school peers. We also reviewed relevant studies on the preparatory schools conducted by internal and external sources. To evaluate the effectiveness of the preparatory schools in accomplishing their missions, we held discussions with senior service academy and preparatory school officials to determine what results they expected the preparatory schools to achieve, and we obtained their assessments of the schools’ effectiveness. We reviewed and analyzed aggregate preparatory school performance data for preparatory school academic years 1993 through 2002. 
We reviewed and analyzed the preparatory schools’ annual assessment reports, as well as other relevant data gathered from the academies and the preparatory schools. For class totals and for the four target groups of students at each of the preparatory schools, we analyzed the number and percentage of preparatory school students who entered and graduated from a preparatory school; the number and percentage of preparatory school graduates who accepted an appointment to an academy; the number and percentage of preparatory school graduates who accepted an appointment to an academy and then graduated from or are still attending an academy; and the number and percentage of the original preparatory school students who graduated from or are still attending an academy. We did not independently assess data reliability, but we obtained assurances about data completeness, accuracy, and reliability from academy officials responsible for maintaining data for each preparatory school. To assess the effectiveness of DOD oversight of the preparatory schools, we reviewed DOD guidance on oversight roles, responsibilities, and reporting requirements, as well as academy regulations and instructions, and discussed oversight activities with DOD, service, and service academy officials. Additionally, we reviewed criteria on the principles of effective management, such as those found in Internal Control Standards: Internal Control Management and Evaluation Tool. We conducted our review from February 2003 through July 2003 in accordance with generally accepted government auditing standards. Colorado Springs, Colorado (co- located with the U.S. Air Force Academy) Figure 7 shows the composition of each class of Air Force Academy Preparatory School enrollees over the past 10 years. Minorities are the largest target group at the school, averaging 48 percent of enrollment. 
The percentage of recruited athletes decreased from 1993 through 1996, and it has remained relatively constant since then at about 40 percent of enrollment. Enlisted personnel experienced the greatest change, constituting 12 percent of the student body in 1993, and peaking to 28 percent in 1996. Enlisted personnel averaged 18 percent of the enrolled class from 1993 through 2002. Since 1996 the percentage of enlisted personnel enrolled at the Military Academy Preparatory School has generally declined from a high of 54 percent in 1996 to a low of 25 percent in 2002. Concurrently, the enrollment of minorities has fluctuated between 29 and 49 percent. (See fig. 8.) The composition of each class of Naval Academy Preparatory School enrollees over the past 10 years is shown in figure 9. Minorities constituted the largest target group, averaging 44 percent from 1993 through 2002. Enlisted personnel made up, on average, 29 percent of the enrolled class, and recruited athletes made up, on average, 31 percent of the class. Figure 10 shows the percentage of all Air Force Academy Preparatory School students who graduated from or are still attending the Air Force Academy. From 1993 through 1998, academy graduation rates of Air Force Preparatory School students ranged from 43 percent to 53 percent. Figure 11 shows the same data for each of the four target groups. Figure 12 shows the percentage of all Army Preparatory School students who graduated from or are still attending the Military Academy. From 1993 through 1998, academy graduation rates of Army Preparatory School students ranged from 46 percent to 59 percent. Figure 13 shows the same data for each of the four target groups. Figure 14 shows the percentage of all Naval Academy Preparatory School students who graduated from or are still attending the Naval Academy. From 1993 through 1998, academy graduation rates of Naval Academy Preparatory School students ranged from 50 percent to 63 percent. 
Figure 15 shows the same data for each of the four target groups. Figure 16 shows the graduation rates for the Air Force Academy Preparatory School. In 2002, 79 percent of the students enrolled in the U.S. Air Force Preparatory School graduated from the preparatory school. The graduation rate remained relatively constant, averaging 78 percent from 1993 through 2002. Air Force preparatory school graduation rates by target group are shown in figure 17. Recruited athletes had the lowest graduation rates, averaging 67 percent over 10 years. Women and minorities had similar graduation rates over 10 years, both averaging 83 percent. Enlisted personnel had the highest graduation rate, averaging 85 percent over the past 10 years. Figure 18 shows the trend in Army preparatory school graduation rates over the past 10 years. In 2002, 77 percent of students in the U.S. Military Academy Preparatory School graduated from the school. The graduation rate increased during the past 10 years, from a low of 59 percent in 1993 to a high of 82 percent in 2000, before declining slightly in both 2001 and 2002. Figure 19 shows the Army preparatory school graduation rates, by target group, over the past 10 years. The rate for women increased—in fact doubled—from a low of 42 percent in 1993 to a high of 84 percent in 2001. On average, minorities graduated at a higher rate—73 percent—than did the other target groups from 1993 through 2002. Enlisted personnel had the lowest graduation rate among the four target groups, averaging 67 percent over 10 years. Figure 20 shows the trend in overall graduation rates at the Navy preparatory school for the past 10 years. Graduation rates at the school generally declined until 2000, reaching a low of 68 percent in that year. The graduation rate increased in the last 2 years, reaching 73 percent in 2002. Graduation rates averaged 75 percent over the 10 years. Figure 21 shows historical trends in Navy preparatory school graduation rates for target groups. 
Enlisted personnel had an average graduation rate of 83 percent, the highest among the target groups. Women and recruited athletes had lower graduation rates, both averaging 69 percent over 10 years. Graduation rates for minorities generally declined after peaking at 90 percent in 1994 and averaged 73 percent from 1993 to 2002. Figure 22 shows the percentage of Air Force preparatory school graduates who accepted appointments at the Air Force Academy. This percentage has remained relatively constant over the past 10 years. On average, 91 percent of the graduates accepted appointments to attend the Air Force Academy. Figure 23 shows the percentage of Air Force preparatory school students in the four target groups–enlisted personnel, minorities, recruited athletes, and women–who accepted an appointment to the Air Force Academy. All four groups had similar acceptance rates of appointments for admission. For the past 10 years, of those who graduated, an average of 91 percent of enlisted personnel, 92 percent of minorities, 93 percent of recruited athletes, and 90 percent of women accepted an appointment to attend the Air Force Academy. Figure 24 shows the rate at which U.S. Military Preparatory School students accepted appointments to attend the U.S. Military Academy. From 1993 through 2002, 97 percent of U.S. Military Academy Preparatory School graduates accepted appointments to attend the U.S. Military Academy. Figure 25 shows the rate at which Army preparatory school students in the target groups accepted appointments to attend the Military Academy. On average, almost all students in three target groups—minorities, recruited athletes, and women—accepted appointments into the U.S. Military Academy from 1993 through 2002. The acceptance rate for enlisted personnel decreased to 85 percent in 1999; however, it increased to 128 percent in 2002. Figure 26 shows the acceptance rate, by Navy preparatory school graduates, of appointments into the Naval Academy. 
Rates remained relatively constant over 10 years, falling to a low of 87 percent in 1998 and increasing to 100 percent in 1999. On average, 97 percent of the graduates accepted appointments to attend the U.S. Naval Academy. Figure 27 shows the rate at which Navy preparatory school students in the target groups accepted appointments to attend the Naval Academy. Women had the highest average acceptance rate among the four target groups, averaging 100 percent over 10 years. Although acceptance rates for enlisted personnel remained at or above 100 percent from 1999 through 2002, they had the lowest average acceptance rate, averaging 90 percent, over 10 years. On average, 99 percent of minorities and 95 percent of recruited athletes accepted nominations to attend the U.S. Naval Academy. Figure 28 shows a comparison between the Air Force Academy graduation rates of preparatory school graduates and those of students who accepted direct appointments to the academy. Academy graduation rates of Air Force Academy Preparatory School graduates from 1993 through 1998 were, on average, lower than those of direct appointees. Only in 1993 was the difference in graduation rates between preparatory school graduates and direct appointees greater than 10 percent. Figure 29 shows a comparison between the Military Academy graduation rates of preparatory school graduates and those of students who accepted direct appointments to the academy. Academy graduation rates of Military Academy Preparatory School graduates from 1993 through 1998 were, on average, lower than those of direct appointees. Figure 30 shows a comparison between the Naval Academy graduation rates of preparatory school graduates and those of students who accepted direct appointments to the academy. Academy graduation rates of Naval Academy Preparatory School graduates from 1993 through 1998 were, on average, lower than those of direct appointees. In addition to the name above, Daniel J. Byrne, Leslie M. Gregor, David F. 
Keefer, Tina M. Morgan, David E. Moser, Cheryl A. Weissman, and Susan K. Woodward made key contributions to this report. | Each year, the U.S. Air Force Academy, the U.S. Military Academy, and the U.S. Naval Academy combined spend tens of millions of dollars to operate preparatory schools that provide an alternative avenue for about 700 students annually to gain admission to the service academies. Service academy officials screen all applicants to identify those who they believe could succeed at the academies but who would benefit from more preparation. The Department of Defense (DOD) pays the full cost of providing this preparation. GAO was asked to review the three service academy preparatory schools, and this report specifically assesses (1) the adequacy of their current mission statements, (2) the effectiveness of these schools in accomplishing their missions, and (3) the effectiveness of DOD oversight of these schools. The three service academy preparatory schools' current mission statements do not clearly articulate the purpose for which the schools are being used by their respective service academies. In accordance with DOD guidance and the service academies' expectations, the preparatory schools give primary consideration for enrollment to enlisted personnel, minorities, women, and recruited athletes. However, the preparatory school mission statements are not clearly aligned with DOD guidance and the academies' expectations. This is a continuing problem, which GAO first reported in 1992. Without clear mission statements, the service academies and their respective preparatory schools cannot establish goals that fully reflect the preparatory schools' intended purpose. It is difficult to evaluate how effective the preparatory schools have been in accomplishing their missions because the service academies have not established performance goals for the preparatory schools. 
Without specific performance goals, there is no objective yardstick against which to gauge preparatory school effectiveness, as would be consistent with the principle of best practices for ensuring optimal return on investment. The effectiveness of DOD, military service, and service academy oversight is limited because the existing oversight framework for assessing preparatory school performance does not include performance goals and measures against which to objectively assess performance. DOD and the services receive annual reports from the academies on preparatory school performance. Without stated performance goals and measures, however, the reports do not offer DOD, the services, or the service academies as good an insight into the preparatory schools' performance and their return on investment as they could. |
In December 2007, the United States entered what has turned out to be its deepest recession since the end of World War II. Between the fourth quarter of 2007 and the third quarter of 2009, gross domestic product (GDP) fell by about 2.8 percent, or $377 billion. The unemployment rate rose from 4.9 percent in 2007 to 10.2 percent in October 2009, a level not seen since April 1983. The CBO projects that the unemployment rate will remain above 9 percent through 2011. Confronted with unprecedented weakness in the financial sector and the overall economy, the federal government and the Federal Reserve together acted to moderate the downturn and restore economic growth. The Federal Reserve used monetary policy to respond to the recession by pursuing one of the most significant interest rate reductions in U.S. history. In concert with the Department of the Treasury, it went on to bolster the supply of credit in the economy through measures that provide Federal Reserve backing for a wide variety of loan types, from mortgages to automobile loans to small business loans. The federal government also used fiscal policy to confront the effects of the recession. Existing fiscal stabilizers, such as unemployment insurance and progressive aspects of the tax code, kicked in automatically in order to ease the pressure on household income as economic conditions deteriorated. In addition, Congress enacted a temporary tax cut in the first half of 2008 to buoy incomes and spending and created the Troubled Asset Relief Program in the second half of 2008 to give Treasury authority to act to restore financial market functioning. The federal government’s largest response to the recession to date came in early 2009 with the passage of the Recovery Act, the broad purpose of which is to stimulate the economy’s overall demand for goods and services, or aggregate demand. 
The Recovery Act is specifically intended to preserve and create jobs and promote economic recovery; to assist those most impacted by the recession; to provide investments needed to increase economic efficiency by spurring technological advances in health and science; to invest in transportation, environmental protection, and other infrastructure that will provide long-term economic benefits; and to stabilize the budgets of state and local governments. The CBO estimates that the net cost of the Recovery Act will total approximately $787 billion from 2009 to 2019. The Recovery Act uses a combination of tax relief and government spending to accomplish its goals. The Recovery Act’s tax cuts include reductions to individuals’ taxes, payments to individuals in lieu of reductions to their taxes, adjustments to the Alternative Minimum Tax, and business tax incentives. Tax cuts encompass approximately one-third of the Recovery Act’s dollars. Recovery Act spending includes temporary increases in entitlement programs to aid people directly affected by the recession and provide some fiscal relief to states; this also accounts for about one-third of the Recovery Act. For example, the Recovery Act temporarily increased and extended unemployment benefits, temporarily increased the rate at which the federal government matched states’ Medicaid expenditures, and provided additional funds for the Supplemental Nutrition Assistance Program and the Temporary Assistance for Needy Families program, among other things. Other spending, also accounting for about a third of the act, falls into the category of grants, loans, and contracts. This includes government purchases of goods and services, grants to states through programs such as the State Fiscal Stabilization Fund for education and other government services, and government investment in infrastructure, health information technology, renewable energy research, and other areas. 
In interpreting recipient reporting data, it is important to recognize that the recipient reporting requirement only covers a defined subset of the Recovery Act’s funding. The reporting requirements apply only to nonfederal recipients of funding, including all entities receiving Recovery Act funds directly from the federal government such as state and local governments, private companies, educational institutions, nonprofits, and other private organizations. OMB guidance, consistent with the statutory language in the Recovery Act, states that these reporting requirements apply to recipients who receive funding through the Recovery Act’s discretionary appropriations, not recipients receiving funds through entitlement programs, such as Medicaid, or tax programs. Recipient reporting also does not apply to individuals. In addition, the required reports cover only direct jobs created or retained as a result of Recovery Act funding; they do not include the employment impact on materials suppliers (indirect jobs) or on the local community (induced jobs). Figure 1 shows the division of total Recovery Act funds and their potential employment effects. Tracing the effects of the Recovery Act through the economy is a complicated task. Prospectively, before the act’s passage or before funds are spent, the effects can only be projected using economic models that represent the behavior of governments, firms, and households. While funds are being spent, some effects can be observed but often relevant data on key relationships and indicators in the economy are available only with a lag, thereby complicating real-time assessments. When a full range of data on outcomes becomes available, economic analysts undertake retrospective analyses, where the findings are often used to guide future policy choices and to anticipate effects of similar future policies. Stimulus spending under the broad scope of the Recovery Act will reverberate at the national, regional, state, and local levels. 
Models of the national economy provide the most comprehensive view of policy effects, but they do not provide insight, except indirectly, about events at smaller geographical scales. The diversity and complexity of the components of the national economy are not fully captured by any set of existing economic models. Some perspective can be gained by contemporaneous close observation of the actions of governments, firms, and households, but a complete and accurate picture of the Recovery Act’s impact will emerge only slowly. Section 1512 of the Recovery Act requires recipients of recovery funds to report on those funds each calendar quarter. These recipient reports are to be filed for any quarter in which a recipient receives Recovery Act funds directly from the federal government. The recipient reporting requirement covers all funds made available by appropriations in division A of the Recovery Act. The reports are to be submitted no later than 10 days after the end of each calendar quarter in which the recipient received Recovery Act funds. Each report is to include the total amount of Recovery Act funds received, the amount of funds expended or obligated to projects or activities, and a detailed list of those projects or activities. For each project or activity, the detailed list must include its name and a description, an evaluation of its completion status, and an estimate of the number of jobs created or the number of jobs retained by that project or activity. Certain additional information is also required for infrastructure investments made by state and local governments. Also, the recipient reports must include detailed information on any subcontracts or subgrants as required by the Federal Funding Accountability and Transparency Act of 2006. Section 1512(e) of the Recovery Act requires GAO and CBO to comment on the estimates of jobs created or retained reported by recipients. 
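The data elements that section 1512 requires in each quarterly report can be sketched as a simple structure. This is an illustrative model only; the field and class names are our assumptions, not the official Federalreporting.gov schema.

```python
from dataclasses import dataclass, field

# Illustrative only: names are ours, not the Federalreporting.gov schema.
@dataclass
class Project:
    name: str
    description: str
    completion_status: str            # recipient's evaluation of completion status
    jobs_created_or_retained: float   # estimate, reported in FTEs

@dataclass
class RecipientReport:
    """Core data elements section 1512 requires each calendar quarter."""
    funds_received: float               # total Recovery Act funds received
    funds_expended_or_obligated: float  # amount expended or obligated to projects
    projects: list[Project] = field(default_factory=list)  # detailed project list
```

Infrastructure investments by state and local governments, and subcontract or subgrant details required by the Federal Funding Accountability and Transparency Act of 2006, would add further fields to such a structure.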
In its guidance to recipients for estimating employment effects, OMB instructed recipients to report the direct employment effects as a single number of “jobs created or retained.” Recipients are not expected to report on the employment impact on materials suppliers (indirect jobs) or on the local community (induced jobs). According to the guidance, “A job created is a new position created and filled or an existing unfilled position that is filled as a result of the Recovery Act; a job retained is an existing position that would not have been continued to be filled were it not for Recovery Act funding. Only compensated employment . . . should be reported. The estimate of the number of jobs . . . should be expressed as ‘full-time equivalents (FTE),’ which is calculated as total hours worked in jobs created or retained divided by the number of hours in a full-time schedule, as defined by the recipient.” Consequently, recipients are expected to report the amount of labor hired, or not fired, as a result of having received Recovery Act funds. It should be noted that one FTE does not necessarily equate to the job of one person. Firms may choose to increase the hours of existing employees, for example, which increases employment but does not necessarily add a job in the sense of adding a person to the payroll. To implement the recipient reporting data requirements, OMB has worked with the Recovery Accountability and Transparency Board (Recovery Board) to deploy a nationwide data collection system at www.federalreporting.gov (Federalreporting.gov), while the data reported by recipients are available to the public for viewing and downloading on www.recovery.gov (Recovery.gov). Recovery.gov, a site designed to provide transparency into spending on Recovery Act programs, is the official source of information related to the Recovery Act. 
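The FTE formula in OMB’s guidance amounts to a simple ratio. A minimal sketch with hypothetical numbers (the 40-hour week and 13-week quarter are our assumptions; recipients define their own full-time schedule):

```python
def fte(total_hours_worked: float, full_time_schedule_hours: float) -> float:
    """OMB formula: total hours worked in jobs created or retained,
    divided by the number of hours in a full-time schedule as
    defined by the recipient."""
    return total_hours_worked / full_time_schedule_hours

# Hypothetical quarter: a 40-hour week over 13 weeks gives a 520-hour
# full-time schedule. Three existing employees each work 10 extra
# hours per week instead of a new person being hired.
schedule = 40 * 13         # 520 hours in the full-time quarterly schedule
extra_hours = 3 * 10 * 13  # 390 additional hours worked
print(fte(extra_hours, schedule))  # 0.75
```

The example illustrates why one FTE does not equate to one person: added hours for existing employees produce a fractional FTE with no one new on the payroll.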
The Recovery Board’s goals for the Recovery Act Web site include promoting accountability by providing a platform to analyze Recovery Act data and serving as a means of tracking fraud, waste, and abuse allegations by providing the public with accurate, user-friendly information. In addition, the site promotes official data in public debate, assists in providing fair and open access to Recovery Act opportunities, and promotes an understanding of the local impact of Recovery Act funding. To address the level of risk in recipient reporting, OMB’s June 22, 2009, guidance on recipient reporting includes a requirement for data quality reviews. OMB’s data quality guidance is intended to address two key data problems: material omissions and significant reporting errors. Both create the risk that the reported information is incomplete or inaccurate. As shown in figure 2, OMB gave specific time frames for reporting that allow prime recipients and delegated subrecipients to prepare and enter their information on days 1 through 10 following the end of the quarter. During days 11 through 21, prime recipients will be able to review the data to ensure that complete and accurate reporting information is provided prior to a federal agency review and comment period beginning on the 22nd day. During days 22 to 29 following the end of the quarter, federal agencies will perform data quality reviews and will notify the recipients and delegated subrecipients of any data anomalies or questions. The original submitter must complete data corrections no later than the 29th day following the end of the quarter. Prime recipients have the ultimate responsibility for data quality checks and the final submission of the data. Since this is a cumulative reporting process, additional corrections can take place on a quarterly basis. 
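The quarterly review windows in OMB’s guidance can be expressed as a small lookup. A sketch only, assuming the offsets are counted in calendar days after the quarter ends, as the guidance describes; the phase labels are our paraphrases.

```python
def reporting_phase(days_after_quarter_end: int) -> str:
    """Map a day offset to the review phase in OMB's June 22, 2009 guidance."""
    if 1 <= days_after_quarter_end <= 10:
        return "recipient and delegated subrecipient data entry"
    if 11 <= days_after_quarter_end <= 21:
        return "prime recipient review"
    if 22 <= days_after_quarter_end <= 29:
        return "federal agency review; corrections due by day 29"
    return "outside the quarterly reporting cycle"

print(reporting_phase(15))  # prime recipient review
```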
OMB guidance does not explicitly mandate a methodology for conducting data quality reviews at the prime and delegated subrecipient level or by the federal agencies. Instead, the June 22, 2009, guidance gives the relevant party conducting the data quality review discretion in determining the optimal method for detecting and correcting material omissions or significant reporting errors. The guidance says that, at a minimum, federal agencies, recipients, and subrecipients should establish internal controls to ensure data quality, completeness, accuracy, and timely reporting of all amounts funded by the Recovery Act. The Recovery Board published the results of the first round of recipient reporting on Recovery.gov on October 30, 2009. According to the Web site, recipients submitted 130,362 reports indicating that 640,329 “jobs” were created or saved as a direct result of the Recovery Act. These data solely reflect the direct FTEs reported by recipients of Recovery Act grants, contracts, and loans for the period from the act’s signing into law on February 17, 2009, through September 30, 2009. As shown in figure 3, grants, contracts, and loans account for about 27 percent, or $47 billion, of the approximately $173 billion in Recovery Act funds paid out as of September 30, 2009. Recipients in all 50 states reported jobs created or retained with Recovery Act funding provided through a wide range of federal programs and agencies. Table 1 shows the distribution of jobs created or retained across the nation as reported by recipients on Recovery.gov. Not surprisingly, California, the most populous state, received the most Recovery Act dollars and accounted for the largest number of the reported jobs created or retained. Table 2 shows the number and share of jobs created or retained by federal program agencies as reported by recipients of Recovery Act funding. 
The Department of Education accounted for nearly 400,000, or close to two-thirds, of the reported jobs created or retained. According to the Department of Education, this represents about 325,000 education jobs such as teachers, principals, and support staff in elementary and secondary schools, and educational, administrative, and support personnel in institutions of higher education funded primarily through the State Fiscal Stabilization Fund (SFSF). In addition, approximately 73,000 other jobs (including both education and noneducation positions) were reported saved or created from the SFSF Government Services Fund, the Federal Work Study Program, and Impact Aid funds. While recipients GAO contacted appear to have made good faith efforts to ensure complete and accurate reporting, GAO’s fieldwork and initial review and analysis of recipient data from www.recovery.gov indicate that there is a range of significant reporting and quality issues that need to be addressed. Collecting information from such a large and varied number of entities in a compressed time frame, as required by the Recovery Act, is a huge task. Major challenges associated with the new Recovery Act reporting requirements included educating recipients about the reporting requirements and developing the systems and infrastructure for collecting and reporting the required information. While recipients in the states we reviewed generally made good faith efforts to report accurately, there is evidence, including numerous media accounts, that the data reporting has been somewhat inconsistent. Even recipients of similar types of funds appear to have interpreted the reporting guidance in somewhat different ways and took different approaches in how they developed their jobs data. The extent to which these reporting issues affect overall data quality is uncertain at this point. 
As existing recipients become more familiar with the reporting system and requirements, these issues may become less significant, although communication and training efforts will need to be maintained and, in some cases, expanded as new recipients of Recovery Act funding enter the system. Because this effort will be an ongoing process of cumulative reporting, our first review represents a snapshot in time. We performed an initial set of edit checks and basic analyses on the recipient report data available for download from Recovery.gov on October 30, 2009. Based on that initial review work, we identified recipient report records that showed certain data values or patterns in the data that were either erroneous or merited further review due to an unexpected or atypical data value or relationship between data values. For the most part, the number of records identified by our edit checks was relatively small compared with the 56,986 prime recipient report records included in our review. As part of our review, we examined the relationship between recipient reports showing the presence or absence of any FTE counts and the presence or absence of funding amounts shown in either or both data fields for amount of Recovery Act funds received and amount of Recovery Act funds expended. Forty-four percent of the prime recipient reports showed an FTE value. As shown in table 3, we identified 3,978 prime recipient reports where FTEs were reported but no dollar amount was reported in the data fields for amount of Recovery Act funds received and amount of Recovery Act funds expended. These records account for 58,386 of the total 640,329 FTEs reported. As might be expected, 71 percent of those prime recipient reports shown in table 3 that did not show any FTEs also showed no dollar amount in the data fields for amount of Recovery Act funds received and amount expended. 
There were also 9,247 reports that showed no FTEs but did show some funding amount in either or both of the funds received or expended data fields. The total value of funds reported in the expenditure field on these reports was $965 million. Those recipient reports showing FTEs but no funds and funds but no FTEs constitute a set of records that merit closer examination to understand the basis for these patterns of reporting. Ten recipient reports accounted for close to 30 percent of the total FTEs reported. All 10 reports were grants and the majority of those reports described funding support for education-sector related positions. For reports containing FTEs, we performed a limited, automated scan of the job creation field of the report, which is to contain a narrative description of jobs created or retained. We identified 261 records where there was only a brief description in this job creation field and that brief text showed such words or phrases as “none,” “N/A,” zero, or variants thereof. For most of these records, the value of FTEs reported is small, but there are 10 of these records with each reporting 50 or more FTEs. The total number of FTEs reported for all 261 records is 1,776. While our scan could only identify limited instances of apparently contradictory information between the job description and the presence of an FTE number, we suspect that a closer and more extensive review of the job description field in relation to the count of FTEs would yield additional instances where there were problems, and greater attention to this relationship would improve data quality. In our other analyses of the data fields showing Recovery Act funds, we identified 132 records where the award amount was zero or less than $10. There were also 133 records where the amount reported as received exceeded the reported award amount by more than $10. 
On 17 of these records, the difference between the smaller amount awarded and the larger reported amount received exceeded $1 million. While there may be a reason for this particular relationship between the reported award amount and amount received, it may also indicate improper keying of data or an interpretation, not in accordance with the guidance, of what amounts are to be reported in which fields. We calculated the overall sum and sum by states for number of FTEs reported, award amount, and amount received. We found that they corresponded closely with the values shown for these data on Recovery.gov. Some of the data fields we examined with known values, such as the Treasury Account Symbol (TAS) codes and Catalog of Federal Domestic Assistance (CFDA) numbers, showed no invalid values on recipient reports. However, our analyses give reason for concern that the values shown in these data fields may not be congruent with the data field identifying the funding or awarding agency. Both TAS and CFDA values are linked to specific agencies and their programs. We matched the reported agency codes against the reported TAS and CFDA codes. We identified 454 reports as having a mismatch on the CFDA number; that is, the CFDA number shown on the report did not match the CFDA number associated with either the funding or awarding agency shown on the report. On TAS codes, we identified 595 reports where there was no TAS match. Included in the mismatches were 76 recipient reports where GAO was erroneously identified as either the funding or awarding agency. In many instances, review of these records and their TAS or CFDA values, along with other descriptive information from the recipient report, indicated the likely funding or awarding agencies. These mismatches suggest that either the identification of the agency or the TAS and CFDA codes are in error on the recipient report.
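Checks like these are simple cross-field tests. A minimal sketch in Python, assuming hypothetical field names and a toy two-entry CFDA lookup (the real catalog maps every CFDA number to its administering agency):

```python
# Cross-field edit checks of the kind described above. Field names and
# the two-entry CFDA lookup are illustrative stand-ins, not the actual
# Federalreporting.gov schema.
CFDA_AGENCY = {
    "84.394": "Department of Education",
    "20.205": "Department of Transportation",
}

def edit_check(rec):
    """Return a list of data quality flags for one prime recipient record."""
    flags = []
    has_fte = rec.get("fte", 0) > 0
    has_funds = rec.get("received", 0) > 0 or rec.get("expended", 0) > 0
    if has_fte and not has_funds:
        flags.append("FTEs but no funds received or expended")
    if has_funds and not has_fte:
        flags.append("funds but no FTEs")
    # Narrative job-description field contradicting a positive FTE count
    if has_fte and rec.get("job_desc", "").strip().lower() in {"", "none", "n/a", "zero", "0"}:
        flags.append("FTE count with empty or contradictory job description")
    # Amount received exceeding the award amount by more than $10
    if rec.get("received", 0) > rec.get("award", 0) + 10:
        flags.append("received exceeds award amount by more than $10")
    # CFDA number not belonging to the funding or awarding agency
    owner = CFDA_AGENCY.get(rec.get("cfda"))
    if owner is not None and owner not in (rec.get("funding_agency"), rec.get("awarding_agency")):
        flags.append("CFDA number does not match funding or awarding agency")
    return flags

# A hypothetical record resembling the problem patterns in the text,
# including an agency field erroneously naming GAO:
rec = {"fte": 50.0, "received": 1_200_000, "expended": 0, "award": 100_000,
       "job_desc": "N/A", "cfda": "84.394",
       "funding_agency": "Government Accountability Office",
       "awarding_agency": "Government Accountability Office"}
print(edit_check(rec))  # trips three of the checks
```

Each test here is mechanical, which is the point: a record can pass every single-field validation and still fail these relationship checks.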
Another potential problem area we identified was the provision of data on the number and total amount of small subawards of less than $25,000. There are data fields that collect information on small subawards, small subawards to individuals, and small subawards to vendors. There were 380 prime recipient report records where we observed the same values being reported in both small subawards and small subawards to individuals. We also identified 1,772 other records where it could be clearly established that these values were being reported separately. While we are able to establish that these data are not being consistently reported, it is not possible to assess from the data alone the full extent to which subaward data are being combined or reported separately across all recipient reports. Additionally, we noted 152 reports where, in either the subawards or subawards to individuals data fields, the value for the number of subawards and the total dollar value of subawards were exactly the same and, as such, most likely erroneous. While most recipient report records were not identified as potential problems in these initial edit checks and analyses, our results do indicate the need for further data quality efforts. Under OMB guidance, jobs created or retained were to be expressed as FTEs. We found that data were reported inconsistently even though significant guidance and training were provided by OMB and federal agencies. While FTEs should allow for the aggregation of different types of jobs—part-time, full-time, or temporary—differing interpretations of the FTE guidance compromise the ability to aggregate the data. In addition to issuing guidance, OMB and federal agencies provided several types of clarifying information to recipients as well as opportunities to interact and ask questions or receive help with the reporting process.
These included weekly phone calls between OMB and groups representing the state budget and comptrollers' offices, weekly calls between all state reporting leads, webinars, a call center, and e-mail outreach. State officials reported they took advantage of and appreciated this outreach. For example, Ohio state officials said they were generally satisfied with the technical assistance and guidance provided by OMB—specifically, the assistance they received from the Federalreporting.gov help desk staff. OMB estimated that it had a better than 90 percent response rate for recipient reporting and said that it answered over 3,500 questions related to recipient reporting. The data element on jobs created or retained expressed in FTEs raised questions and concerns for some recipients. OMB staff reported that questions on FTEs dominated the types of questions they fielded during the first round of recipient reporting. Although the recipient reports provide a detailed account of individual projects, as Recovery.gov shows, these projects represent different types of activities and start and end at various points throughout the year, and recipients had various understandings of how to report an FTE. In section 5.2 of the June 22 guidance, OMB states that "the estimate of the number of jobs required by the Recovery Act should be expressed as 'full-time equivalents' (FTE), which is calculated as the total hours worked in jobs retained divided by the number of hours in a full time schedule, as defined by the recipient." Further, "the FTE estimates must be reported cumulatively each calendar quarter." In section 5.3, OMB states that "reporting is cumulative across the project lifecycle, and will not reset at the beginning of each calendar or fiscal year." FTE calculations varied depending on the period of performance the recipient reported on. For example, in the case of federal highways projects, some have been ongoing for six months, while others started in September 2009.
In attempting to address the unique nature of each project, DOT's Federal Highway Administration (FHWA) faced the issue of whether to report FTE data based on the length of time to complete the entire project (project period of performance) versus a standard period of performance, such as a calendar quarter, across all projects. According to FHWA guidance, which was permitted by OMB, FTEs reported for each highway project are expressed as an average monthly FTE. This means that for a project that started on July 1, 2009, the prime recipient would add up the hours worked on that project in the months of July, August, and September and divide that number by the full-time hours for a 3-month schedule. For a project that started on August 1, 2009, the prime recipient should add up the hours worked on that project in the months of August and September and divide that number by the full-time hours for a 2-month schedule. For a project that started on September 1, 2009, the prime recipient should add up the hours worked on that project in the month of September and divide that number by the full-time hours for a 1-month schedule. The issue of a standard performance period is magnified when looking across programs and across states. To consistently compare FTEs, or any type of fraction, across projects, one must use a common denominator. Comparison of FTE calculations across projects poses challenges when the projects have used different time periods as denominators. Tables 4 and 5 below provide more detail on the problems created by not having a standard performance period for calculating FTEs. Table 4 is an application of the FHWA guidance for three projects with varying start dates. This example illustrates how FHWA applied the OMB guidance and how the way FTEs are aggregated in Federalreporting.gov could overstate the employment effects. In this example, because the monthly FTE data, totaling 30, were aggregated without standardizing for the quarter, FTEs would be overstated by 10 relative to the OMB guidance.
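The table 4 arithmetic can be sketched directly. The figure of 173 full-time hours per month (about 2,080 hours per year divided by 12) is an illustrative assumption, not a number taken from the FHWA or OMB guidance:

```python
FULL_TIME_HOURS_PER_MONTH = 173  # illustrative assumption: ~2,080 hours/year / 12

def fhwa_avg_monthly_fte(hours_by_month):
    """FHWA approach: total hours divided by the full-time hours for
    however many months the project was active."""
    return sum(hours_by_month) / (len(hours_by_month) * FULL_TIME_HOURS_PER_MONTH)

def standardized_quarterly_fte(hours_by_month):
    """Standardized approach: total hours divided by a full quarter
    (3 months), regardless of the project's start date."""
    return sum(hours_by_month) / (3 * FULL_TIME_HOURS_PER_MONTH)

# Three projects starting July 1, August 1, and September 1, each
# employing 10 full-time workers (1,730 hours per month) while active.
projects = [[1730, 1730, 1730], [1730, 1730], [1730]]
print(sum(fhwa_avg_monthly_fte(p) for p in projects))               # 30.0
print(round(sum(standardized_quarterly_fte(p) for p in projects)))  # 20
```

Under the FHWA averaging, each project reports 10 FTEs regardless of how long it ran, so the aggregate is 30; standardizing on the full quarter yields 20, the overstatement of 10 described in the text.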
A standardized quarterly measure and job-years are included as examples of a standard period of performance. A job-year is simply one job for 1 year. Regardless of when the project begins, the total hours worked is divided by a full year's worth of time (12 months), which would enable aggregation of employment effects across programs and time. Table 5 is an application of the OMB guidance for two projects with varying start dates. In this example, the OMB guidance understates the employment effect relative to the standardized measure. Cumulative FTE per OMB guidance would result in 20 FTE compared with 30 FTE when standardized on a quarterly basis. Both a standardized quarterly FTE measure and a job-year measure are included as examples of a standard period of performance. Regardless of when the project begins, the total hours worked is divided by a full year's worth of time (12 months), which would enable aggregation of employment effects across programs and time. There are examples from other DOT programs where the issue of a project period of performance created significant variation in the FTE calculation. For example, in Pennsylvania, each of four transit entities we interviewed used a different denominator to calculate the number of full-time equivalent jobs they reported on their recipient reports for the period ending September 30, 2009. Southeastern Pennsylvania Transportation Authority in Philadelphia used 1,040 hours as its denominator, since it had projects underway in two previous quarters. Port Authority of Allegheny County prorated the hours based on each contractor's start date and to reflect that hours worked in September were not included due to lag time in invoice processing. Port Authority used 1,127 hours for contractors starting before April, 867 hours for contractors starting in the second quarter, and 347 hours for contractors starting in the third quarter.
Lehigh and Northampton Transportation Authority in Allentown used 40 hours in the 1512 report it tried to submit, but, due to some confusion about the need for corrective action, the report was not filed. Finally, the Pennsylvania Department of Transportation, in the report for nonurbanized transit systems, used 1,248 hours, which was prorated by multiplying 8 hours per workday times the 156 workdays between February 17 and September 30, 2009. This variation in the period of performance used to calculate FTEs also occurred across transit programs in several other of our selected states, and it occurred in Education programs as well. Across a number of states we reviewed, local education agencies and higher education institutions used different denominators to calculate the number of FTEs they reported on their recipient reports for the period ending September 30, 2009. For example, two higher education systems in California each calculated the FTE differently. In the case of one, officials chose to use a two-month period as the basis for the FTE performance period. The other chose to use a year as the basis of the FTE. The result is almost a three-to-one difference in the number of FTEs reported for each university system in the first reporting period. Although Education provides alternative methods for calculating an FTE, in neither case does the guidance explicitly state the period of performance of an FTE. Recipients were also confused about counting a job created or retained even though they knew the number of hours worked that were paid for with Recovery Act funds. For example, the Revere Housing Authority, in administering one Recovery Act project, told us that it may have underreported jobs data from an architectural firm providing design services for a Recovery Act window replacement project at a public housing complex.
The employees at the architecture firm that designed the window replacement project were employed before the firm received the Recovery Act funded contract and will continue to be employed after the contract has been completed, so from the Revere Housing Authority's perspective there were no jobs created or retained. As another example, officials from one housing agency reported the number of people, by trade, who worked on Recovery Act related projects, but did not apply the full-time equivalent calculation outlined by OMB in the June 22 reporting guidance. Officials from another public housing agency told us that they based the number of jobs they reported on letters from their contractors detailing the number of positions rather than FTEs. OMB staff said that thinking about the jobs created or retained as hours worked and paid for with Recovery Act funds was a useful way to understand the FTE guidance. While OMB's guidance explains that, in applying the FTE calculation for measuring the number of jobs created or retained, recipients will need the total number of hours worked that are funded by the Recovery Act, it could emphasize this relationship more thoroughly throughout its guidance. OMB's decision to convert jobs into FTEs provides a consistent lens to view the amount of labor being funded by the Recovery Act, provided each recipient uses a standard time frame in considering the FTE. The current OMB guidance, however, creates a situation where, because there is no standard starting or ending point, an FTE provides an estimate for the life of the project. Without normalizing the FTE, aggregate numbers should not be considered, and the issue of a standard period of performance is magnified when looking across programs and across states. Recipients we interviewed were able to report into and review data on Federalreporting.gov.
Particularly given the scale of the project and how quickly it was implemented (within several months), the ability of the reporting mechanisms to handle the volume of data from the range of recipients represents a solid first step in the data collection and reporting process for the fulfillment of the section 1512 mandate. Nonetheless, there were issues associated with the functional process of reporting. For example, state officials with decentralized reporting structures reported problems downloading submitted information from Recovery.gov to review top-line figures such as money spent and jobs created or retained. The Iowa Department of Management, which did Iowa's centralized reporting into Federalreporting.gov, said that, overall, the system was very slow. In addition, as the system was processing input from Iowa's submission, every time it encountered an error, it kicked back the whole submission but showed only the one error. After fixing the one errant entry, the state resubmitted its information, which would then be sent back in its entirety the next time an error was encountered. Iowa officials believe it would have been more efficient if the system identified all errors in a submission and sent back a complete list of errors to fix. Other recipient reporters we interviewed highlighted issues around DUNS numbers and other key identifiers, along with the inability to enter more than one congressional district for projects that span multiple districts. The expectation is that many of these entry and processing errors were captured through the review process, but the probability that all errors were caught is low. Generally, state officials from our 17 jurisdictions reported being able to work through technical reporting and processing glitches. For example, Florida officials reported that they encountered many technical issues but were able to solve the problems by contacting the Recovery Board.
Ohio officials noted that, although they were initially concerned, Federalreporting.gov held up well in spite of the tremendous amount of data being submitted. While they faced some challenges, California officials reported that, overall, they were successful in reporting the numbers into Federalreporting.gov. They worked with the technical team at Federalreporting.gov and performed a test on October 1, 2009, to see if the upload of the job data was going to work. During the October reporting time frame, New Jersey officials reported that they generally did not experience significant recipient reporting problems. The few reporting problems New Jersey experienced occurred in relation to issues uploading the data onto Federalreporting.gov and issues requiring clarifying guidance from the relevant federal agency. Notwithstanding the concerns over the slowness of the reporting system and error checks, Iowa officials also reported that the process worked rather well. They determined that most of their state reporting problems stemmed from a few recipients not fully grasping all of the training the state had provided and thus not knowing or having key information like DUNS numbers, and in some cases submitting erroneous information. The state department of management plans to specifically address the 30 or so recipients associated with these issues—just about all of which were school districts. As a follow-up from this first reporting cycle, several states have developed a list of lessons learned to share with OMB and other federal agencies. An example in appendix I illustrates problems public housing authorities had with both the recipient reporting processing functions and the FTE calculation. In addition to the Federalreporting.gov Web site, the Recovery Board used a revised Recovery.gov Web site to display reported data. The revised site includes the ability to search spending data by state, ZIP code, or congressional district and display the results on a map.
The Recovery Board also awarded a separate contract to support its oversight responsibilities with the ability to analyze reported data and identify areas of concern for further investigation. In addition, the board plans to enhance the capabilities of Federalreporting.gov. However, the Recovery Board does not yet use an adequate change management process to manage system modifications. Without such a process, the planned enhancements could become cost and schedule prohibitive. The board has recognized this as a significant risk and has begun development of a change management process. Finally, the board has recognized the need to improve the efficiency of its help desk operation to avoid dropped calls and is working on agreements to address this risk. Recipient reporting data quality is a shared responsibility, but often state agencies have principal accountability because they are the prime recipients. Prime recipients, as owners of the recipient reporting data, have the principal responsibility for the quality of the data submitted, and subrecipients delegated to report on behalf of prime recipients share in this responsibility. In addition, federal agencies funding Recovery Act projects and activities provide a layer of oversight that augments recipient data quality. Oversight authorities including OMB, the Recovery Board, and federal agency IGs also have roles to play in ensuring recipient reported data quality, while the general public and nongovernmental entities can help as well by highlighting data problems for correction. All of the jurisdictions we reviewed had data quality checks in place for the recipient reporting data, either at the state level or a state agency level. State agencies, as entities that receive Recovery Act funding as federal awards in the form of grants, loans, or cooperative agreements directly from the federal government, are often the prime recipients of Recovery Act funding. 
Our work in the 16 states and the District of Columbia showed differences in the way states as prime recipients approach recipient reporting data quality review. Officials from nine states reported having chosen a centralized reporting approach, meaning that state agencies submit their recipient reports to a state central office, which then submits them to Federalreporting.gov. For example, Colorado's Department of Transportation provided its recipient report to a central entity, the Colorado Office of Information Technology, for submission to Federalreporting.gov. States with centralized reporting systems maintain that they will be able to provide more oversight of recipient reporting with this approach. Advocates of centralized reporting also expect that this method will increase data quality, decrease omissions and duplicate reporting, and facilitate data cleanup. Officials from the remaining eight jurisdictions reported using a decentralized reporting system. In these cases, the state program office administering the funds is the entity submitting the recipient report. In Georgia, for example, the State Department of Transportation is responsible for both reviewing recipient report data and submitting it to Federalreporting.gov. Illinois, as is the case for four other decentralized states, is quasi-decentralized: the data are centrally reviewed but reported in a decentralized manner. When the audit office informs the Office of the Governor that its review is complete and if the Office of the Governor is satisfied with the results, the Illinois state reporting agency may upload agency data to Federalreporting.gov. Appendix I provides details on California's recipient reporting experiences. As a centralized reporting state, Iowa officials told us that they developed internal controls to help ensure that the data submitted to OMB, other federal entities, and the general public, as required by section 1512 of the Recovery Act, are accurate.
Specifically, Iowa inserted validation processes in its Recovery Act database to help reviewers identify and correct inaccurate data. In addition, state agency and local officials were required to certify their review and approval of their agency’s information prior to submission. Iowa state officials told us that they are working on data quality plans to include being able to reconcile financial information with the state’s centralized accounting system. According to Iowa officials, the number of Recovery Act grant awards improperly submitted was relatively small. As a decentralized reporting state, New Jersey officials reported that a tiered approach to data quality checks was used for all Recovery Act funding streams managed by the state. Each New Jersey state department or entity was responsible for formulating a strategy for data quality reviews and implementing that strategy. The New Jersey Department of Community Affairs, for example, directed subrecipients to report data directly into an existing departmental data collection tool modified to encompass all of the data points required by the Recovery Act. This system gave the Department of Community Affairs the ability to view the data as it came in from each subrecipient. From this data collection tool, the department uploaded prime and subrecipient data to Federalreporting.gov. All departmental strategies were reviewed by the New Jersey Governor’s office and the New Jersey Recovery Accountability Task Force. The Governor’s office conducted a review of the reports as they were uploaded to Federalreporting.gov on a program-by-program, department-by-department basis to identify any outliers, material omissions, or reporting errors that could have been overlooked by departments. 
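An outlier screen of the kind these state reviews describe can be sketched simply. The ratio test, threshold, and field names below are hypothetical illustrations, not any state's actual procedure:

```python
# Illustrative outlier screen: flag reports whose FTE-per-dollar ratio
# is far from the median of their peers. Threshold and fields are
# hypothetical, not an actual state review rule.
def outlier_flags(reports, threshold=5.0):
    """Return ids of reports whose FTE/expenditure ratio deviates from
    the peer median by more than the threshold factor, either way."""
    ratios = sorted(r["fte"] / r["expended"] for r in reports if r["expended"] > 0)
    if not ratios:
        return []
    median = ratios[len(ratios) // 2]
    flagged = []
    for r in reports:
        if r["expended"] > 0 and median > 0:
            ratio = r["fte"] / r["expended"]
            if ratio > threshold * median or ratio < median / threshold:
                flagged.append(r["id"])
    return flagged

reports = [
    {"id": "A", "fte": 10.0, "expended": 500_000},
    {"id": "B", "fte": 12.0, "expended": 600_000},
    {"id": "C", "fte": 11.0, "expended": 550_000},
    {"id": "D", "fte": 200.0, "expended": 100_000},  # ~100x the peer ratio
]
print(outlier_flags(reports))  # ['D']
```

A screen like this does not prove a record is wrong; it only selects records for the kind of manual follow-up the state review processes describe.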
To help ensure the quality of recipient report data, the Recovery Board encouraged each federal Office of Inspector General overseeing an agency receiving Recovery Act funds to participate in a governmentwide Recovery Act Reporting Data Quality Review. The Recovery Board requested the IG community to determine the following: (1) the existence of documentation on the agencies’ processes and procedures to perform limited data quality reviews targeted at identifying material omissions and significant reporting errors, (2) the agencies’ plans for ensuring prime recipients report quarterly, and (3) how the agencies intend to notify the recipient of the need to make appropriate and timely changes. In addition, IGs reviewed whether the agency had an adequate process in place to remediate systemic or chronic reporting problems and if they planned to use the reported information as a performance management and assessment tool. We reviewed the 15 IG reports that were available as of November 12, 2009. Our review of these reports from a range of federal agencies found that they had drafted plans or preliminary objectives for their plans for data quality procedures. Published IG audits on agencies’ Recovery Act data quality reviews that we examined indicated that federal agencies were using a variety of data quality checks, which included automated or manual data quality checks or a combination. Computer programs drive the automated processes by capturing records that do not align with particular indicators determined by the agency. Agencies may use a manual process where a designated office will investigate outliers that surface during the automated test. For example, the automated process for Education performs data checks to validate selected elements against data in the department’s financial systems. 
As part of its data quality review, Education officials are to examine submitted reports against specific grant programs or contract criteria to identify outliers for particular data elements. Of the IG reports that we reviewed that mentioned systemic or chronic problems, 9 of the 11 found that their agencies had a process in place to address these problems. Although some of the IGs were unable to test the implementation of their agency’s procedures for reviewing the quarterly recipient reports, based on their initial audit, they were able to conclude that the draft plan or preliminary objectives for data quality review were in place. According to OMB’s guidance documents, federal agencies must work with their recipients to ensure comprehensive and accurate recipient reporting data. A September 11, 2009, memorandum from OMB directed federal agencies to identify Recovery Act award recipients for each Recovery Act program they administer and conduct outreach actions to raise awareness of registration requirements, identify actual and potential barriers to timely registration and reporting, and provide programmatic knowledge and expertise that the recipient may need to register and enter data into Federalreporting.gov. Federal agencies were also expected to provide resources to assist state and select local governments in meeting reporting requirements required by the Recovery Act. In addition, federal agencies were to identify key mitigation steps to take to minimize delays in recipient registration and reporting. OMB also requires that federal agencies perform limited data quality reviews of recipient data to identify material omissions and significant reporting errors and notify the recipients of the need to make appropriate and timely changes to erroneous reports. 
Federal agencies are also to coordinate how to apply the definitions of material omissions and significant reporting errors in given program areas or across programs in a given agency to ensure consistency in the manner in which data quality reviews are carried out. Although prime recipients and federal agency reviewers are required to perform data quality checks, none are required to certify or approve data for publication. However, as part of their data quality review, federal agencies must classify the submitted data as not reviewed by the agency; reviewed by the agency with no material omissions or significant reporting errors identified; or reviewed by the agency with material omissions or significant reporting errors identified. If an agency fails to choose one of these categories, the system will default to not reviewed by the agency. The prime recipient report records we analyzed included data on whether the prime recipient and the agency reviewed the record in the OMB data quality review time frames. In addition, the report record data included a flag as to whether a correction was initiated. A correction could be initiated by either the prime recipient or the reviewing agency. Table 6 shows the number and percentage of prime recipient records that were marked as having been reviewed by either or both parties and whether a correction was initiated. OMB's guidance provided that a federal agency, depending on its review approach and methodology, could classify data as reviewed by the agency even if a separate and unique review of each submitted record had not occurred. As shown in table 6, more than three quarters of the prime recipient reports were marked as having undergone agency review. Less than one percent were marked as having undergone review by the prime recipient. The small percentage reviewed by the prime recipients themselves during the OMB review time frame warrants further examination.
While it may be the case that the recipients’ data quality review efforts prior to initial submission of their reports precluded further revision during the review time frame, it may also be indicative of problems with the process of noting and recording when and how the prime recipient reviews occur and the setting of the review flag. Overall, slightly more than a quarter of the reports were marked as having undergone a correction during the OMB review time frames. The Federal-Aid Highway Program provided a good case study of federal agency data quality reviews because the responsible federal agency, FHWA, had previous experience estimating and reporting on the employment effects of investment in highway construction. As a result, FHWA would seem to be better positioned than some other federal agencies to fulfill the job creation or retention reporting requirements under the Recovery Act and may have data quality review processes that other federal agencies could replicate. We met with officials and reviewed available documentation including federal highway reporting documents and payroll records at the selected state departments of transportation and selected vendors. Overall, we found that the state departments of transportation as prime recipients had in place plans and procedures to review and ensure data quality. We followed up with the state departments of transportation to confirm that these procedures were followed for highway projects representing at least 50 percent of the Recovery Act highway reimbursements as of September 4, 2009 in the 17 jurisdictions where we are conducting bimonthly reviews and reviewed available documentation. Appendix I illustrates recipient reporting processes and data quality checks at the Florida Department of Transportation. 
In addition to the section 1512 reporting requirements, recipients of certain transportation Recovery Act funds, such as state departments of transportation, are subject to the reporting requirements outlined in section 1201(c) of the Recovery Act. Under section 1201(c), recipients of transportation funds must submit periodic reports on the amount of federal funds appropriated, allocated, obligated, and reimbursed; the number of projects put out to bid, awarded, or on which work has begun or been completed; and the number of direct and indirect jobs created or sustained, among other things. The Recovery Act section 1201(c) requirement called for project-level data to be reported twice before the first Recovery Act section 1512 report was due. DOT is required to collect and compile this information for Congress, and it issued its first report to Congress in May 2009. Consequently, DOT and its modal administrations, such as FHWA, and state departments of transportation gained experience collecting and reporting job creation and retention information before the first Recovery Act section 1512 report was due in October 2009, and FHWA had its data collection and review process in place in advance of October 1, 2009, the start of section 1512 reporting. To help fulfill these reporting requirements, FHWA implemented a reporting structure that ties together the federal and state levels of reporting, creating both a chain of evidence and redundancy in the review of the reported data. Figure 4 shows the reporting structure. As part of this reporting structure, FHWA also created the Recovery Act Data System (RADS), with the updated version of the system released in early September 2009. RADS is primarily designed as a repository of data for states, but it also serves as an important oversight tool for FHWA because it links federal financial data to project data reported by the states.
The system helps ensure consistent definitions of fields and enables FHWA to auto-populate identification fields, including DUNS numbers, award numbers, and total award amounts, to both reduce the burden at the project level and to reduce data entry errors. In addition, individual contracts for highway projects funded with Recovery Act funds include monthly reporting requirements covering payroll records, hours worked, and data quality assurances. FHWA may withhold payments if a recipient is found to be in noncompliance with the reporting requirements. An appendix provides reporting examples for contractors in Georgia and another state. FHWA has taken several steps to help ensure the reliability of the information contained in RADS. First, FHWA compared information states recorded in RADS to the information states submitted to Federalreporting.gov to identify inconsistencies or discrepancies. Second, as part of an ongoing data reliability process, FHWA monitors select fields in RADS, such as the number of projects, types of projects, and where projects are located, and performs data validation and reasonableness tests. For example, it checks if a rate of payment in dollars per hour is too high or too low. When potential issues are identified, FHWA division offices work with the state department of transportation or central office to make necessary changes. For this round of recipient reporting, FHWA used an automated process to review all of the reports filed by recipients. These automated reviews included various data validation and reasonableness checks. For example, FHWA checked whether the range of FTEs reported was within its own economic estimates. For any reports that were out of range, FHWA would comment on these reports. As described earlier, only recipients could make changes to the data. In making a comment, FHWA let the recipient know there was potential concern with the record.
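FHWA's actual RADS schema and thresholds are not spelled out in this report; purely as a rough illustration, a reasonableness test of the pay-rate kind described above might look like the following sketch, in which the field names and dollar bounds are invented for the example:

```python
# Hypothetical sketch of an automated reasonableness check of the kind
# described above: flag records whose hourly pay rate falls outside a
# plausible range so a reviewer can comment on them. Field names and
# thresholds are illustrative assumptions, not FHWA's actual system.

MIN_RATE, MAX_RATE = 7.25, 150.00  # assumed plausible $/hour bounds

def flag_pay_rates(records, lo=MIN_RATE, hi=MAX_RATE):
    """Return (record, reason) pairs for records needing reviewer comment."""
    flagged = []
    for rec in records:
        hours = rec.get("hours_worked", 0)
        if hours <= 0:
            flagged.append((rec, "missing or zero hours"))
            continue
        rate = rec["payroll_dollars"] / hours
        if not (lo <= rate <= hi):
            flagged.append((rec, f"pay rate ${rate:.2f}/hr out of range"))
    return flagged

reports = [
    {"id": "GA-001", "payroll_dollars": 52_000, "hours_worked": 2_000},  # $26/hr: plausible
    {"id": "GA-002", "payroll_dollars": 52_000, "hours_worked": 80},     # $650/hr: flagged
    {"id": "GA-003", "payroll_dollars": 12_000, "hours_worked": 0},      # flagged
]
for rec, reason in flag_pay_rates(reports):
    print(rec["id"], "->", reason)
```

As in FHWA's process, a check like this only surfaces potential concerns; the recipient, not the reviewer, decides whether to change the data or explain it.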
The recipient then had the opportunity to either change the data or explain the issue raised in FHWA’s comment. According to FHWA officials, they reviewed 100 percent of more than 7,000 reports submitted by recipients of Recovery Act highway funds and found that the final submissions were generally consistent with department data. Although there were problems with inconsistent interpretation of the guidance, the reporting process went well for highway projects. Education has engaged in numerous efforts to facilitate jobs reporting by states and local educational agencies (LEA). States and LEAs have also taken action to collect and report jobs data and to ensure data quality. Despite these efforts, state and local officials we spoke with raised some concerns about the quality of jobs data reported in October 2009, such as insufficient time to incorporate updated guidance on estimating job counts. To address these concerns, Education and many state officials we interviewed said they plan to take steps to improve the reporting and data quality processes before the next reports are due in January 2010. Our review focused on the State Fiscal Stabilization Fund, as well as Recovery Act grants made for the Elementary and Secondary Education Act of 1965, Title I, Part A and for the Individuals with Disabilities Education Act, Part B. To collect this information, we interviewed Education officials and officials in 10 states—Arizona, California, Colorado, Florida, Georgia, Illinois, Massachusetts, New Jersey, New York, and North Carolina—the District of Columbia, and 12 LEAs, including a mix of LEAs in urban and rural areas. States were selected from the 16 states and the District of Columbia in which we conduct bimonthly reviews of the use of Recovery Act funds as mandated by the Recovery Act. We also reviewed federal and state guidance and other documentation.
Education’s efforts to facilitate jobs reporting by states and LEAs include coordinating with OMB, providing guidance and technical assistance to states and LEAs, and reviewing the quality of the jobs data reported. Education has coordinated its efforts regarding recipient reporting with OMB in a number of ways, including participating in cross-agency workgroups and clearing its guidance materials with OMB prior to disseminating them. On August 10, 2009, Education hosted a web-based technical assistance conference on reporting requirements that included information on OMB’s guidance on estimating and reporting jobs data. On September 11, the department issued guidance specifically related to estimating and reporting jobs created or retained by states and LEAs receiving Recovery Act grants. Education updated its jobs guidance and hosted another web-based technical assistance conference on September 21, providing detailed instructions to states and LEAs on a range of topics, such as how to estimate the number of hours created or retained for a teacher who works less than 12 months in a year. In addition, according to Education officials, the department developed and implemented a draft plan to review the jobs data that states and LEAs reported to Federalreporting.gov in October. This plan addresses the roles and responsibilities of several Education offices to assist with the data quality review throughout the 30-day reporting timeline (for example, Oct. 1 through Oct. 30, 2009). According to the plan, these responsibilities include continuous evaluation of recipient and subrecipient efforts to meet reporting requirements, as well as providing limited data quality reviews and notifying the recipient of the need to make appropriate and timely corrections. The plan says that reviewers are to conduct two types of data quality checks – an automated and a manual review. 
The automated review will validate various data elements for financial assistance against its grant management system, such as prime award numbers, recipient DUNS numbers, and amounts of awards. The manual review will identify outliers in certain data elements, such as whether the reported number of jobs created is reasonable. According to Education officials, upon their initial review of recipient reported data, the most common errors were relatively small—such as mistyped award numbers or incorrect award amounts—and were easily addressed and corrected during the agency review period. Department officials told us that they provided technical assistance to states and were able to have states correct the errors such that almost all of them were corrected before the October 30 deadline. Furthermore, state officials generally provided positive feedback to the department for these efforts, according to Education officials. Education’s Office of Inspector General (OIG) examined Education’s process for reviewing the quality of recipient reported data and found that Education’s data review process was generally adequate. The OIG’s review determined that Education has established a process to perform limited data quality reviews intended to identify problems, such as questionable expenditure patterns or job estimates. OIG also acknowledged that Education developed a process to correct any issues that Education officials find by contacting the recipients who submitted the report. In addition, OIG noted that the department plans to review quarterly data at a state level to determine whether there are systemic problems with individual recipients and that Education plans to use the reported information as a management tool. State educational agencies (SEA) also have taken action to collect and report jobs data and to ensure data quality. 
State officials in Arizona, Massachusetts, New Jersey, and New York and officials in the District of Columbia told us that they adapted their existing data systems or created new ones to track and report jobs data. For example, Massachusetts Department of Education officials created an online quarterly reporting web site to collect jobs data from its LEAs and detailed information on personnel funded by Recovery Act grants. In addition, many SEA and LEA officials we spoke with reported taking steps to ensure data quality, such as pre-populating data fields (that is, inserting data, such as DUNS numbers, into the recipient reporting template for the LEAs), checking the reasonableness of data entered, and looking for missing data. In addition to tracking and reporting jobs data and taking steps to ensure data quality, SEA officials reported providing technical assistance, such as written guidance and Web-based seminars, that explain how LEAs should report job estimates. For example, California state officials had LEAs submit their data through a new web-based data reporting system and, prior to implementing the new system, provided written guidance and offered a web-based seminar to its LEAs. Despite efforts to ensure data quality, state and local officials we spoke with raised some concerns about the quality of jobs data reported in October 2009. For example, LEAs were generally required to calculate a baseline number of hours worked, which is a hypothetical number of hours that would have been worked in the absence of Recovery Act funds. LEA officials were to use this baseline number to determine the number of hours created or retained and to subsequently derive the number of FTEs for job estimates. Each LEA was responsible for deriving its own estimate. New Jersey state officials we interviewed told us that it was likely that LEAs used different methods to develop their baseline numbers, and as a result, LEAs in the same state may be calculating FTEs differently. 
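The FTE derivation at issue can be illustrated with a small sketch. The subtract-a-baseline structure follows the report's description of what LEAs were asked to do; the 520-hour full-time quarter (13 weeks at 40 hours) and the LEA hour counts are invented for illustration, and a given program's actual guidance may define the full-time schedule differently:

```python
# Minimal sketch of the FTE derivation described above: an LEA estimates a
# baseline of hours that would have been worked without Recovery Act funds,
# subtracts it from actual hours worked to get hours created or retained,
# and divides by the hours in a full-time schedule for the reporting period.
# The 520-hour quarter and the sample hour counts are assumptions.

FULL_TIME_HOURS_PER_QUARTER = 13 * 40  # 520; assumed full-time schedule

def fte_created_or_retained(actual_hours, baseline_hours,
                            full_time_hours=FULL_TIME_HOURS_PER_QUARTER):
    hours_created_or_retained = max(actual_hours - baseline_hours, 0)
    return hours_created_or_retained / full_time_hours

# Hypothetical LEA: 5,200 hours actually worked, 3,900 expected without funds
print(fte_created_or_retained(5_200, 3_900))  # 1,300 / 520 = 2.5 FTEs
```

The sketch makes the New Jersey officials' concern concrete: because each LEA estimates its own hypothetical baseline, two LEAs with identical payrolls can report different FTE counts.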
(See appendix II for a complete description of the calculations used to determine the baseline number of hours worked, the number of hours created or retained, and FTEs for jobs created or retained.) According to Illinois state officials, some of their LEAs had double-counted the number of positions, attributing the positions to both state fiscal year 2009 (which ended on June 30, 2009) and fiscal year 2010 (beginning July 1, 2009), in part because the reporting period covered both of the state’s fiscal years. Also, according to Illinois officials, other school districts estimated that zero positions were attributable to the Recovery Act. In those cases, LEA officials received Recovery Act funds before finalizing staff layoffs. Since they had not officially laid off any staff, Illinois officials told us that LEA officials were unsure as to whether those jobs would count as “jobs saved” and believed it best to report that no jobs had been saved because of Recovery Act funding. Illinois officials told us that Education reviewed Illinois’ data but did not ask them to make any corrections; instead, Education asked the state to disaggregate the job estimates by type of position, such as teachers and administrators. Also, one LEA official from New York reported that he did not have enough time to conduct the necessary data quality checks he wanted to perform. Education officials acknowledged that many state and local officials reported various challenges in understanding the instructions and methodology that Education suggested they use to calculate job estimates. According to Education officials, when states contacted the department to report these problems, Education officials provided technical assistance to resolve the state’s specific issues. States faced challenges due to the timing of guidance or changes in guidance on how to estimate jobs attributable to the Recovery Act, according to Education officials and several state officials we interviewed.
For example, Colorado officials reported that, based on June 22, 2009, guidance from OMB, they believed that subrecipient vendors’ jobs would be considered “indirect jobs” and therefore LEAs would not have to provide estimates of their vendors’ jobs in their reports. Colorado officials told us they received guidance at Education’s August technical assistance conference indicating that subrecipients (in this case, LEAs) are supposed to include vendor job estimates based on those jobs directly funded by Recovery Act grants. However, Education’s guidance did not clearly distinguish between direct and indirect vendor jobs, according to state officials, making it difficult for LEAs to determine which vendor jobs to include in their section 1512 reports. State officials also reported receiving further guidance on estimating jobs from Education on September 15 and attending a related technical assistance conference on September 21. On September 16, the Colorado SEA issued guidance stating that LEAs would be responsible for including vendor jobs in the job estimates they would be reporting. (Colorado’s LEA reports were due to the SEA on September 25, because the SEA was required to submit its data to the state controller’s office on September 29 for centralized reporting.) Also, officials in California—where LEAs had to report to the SEA on September 23—said they were not notified until Education’s September 21 conference that all LEAs that received Recovery Act funds had to register in the Central Contractor Registration. They told us that this contradicted previous guidance from Education and would have required LEAs to register within 2 days to meet their state’s September 23 deadline. California officials advised federal officials that the state would implement this requirement for the second quarterly reporting period.
Education officials and officials in two states mentioned actions that might improve the reporting and data quality processes before the next reports are due in January 2010. Education officials suggested a number of possible changes to Federalreporting.gov; for example, allowing Education to pre-populate some basic state data, such as grant award numbers and amounts, would decrease the workload for states and help avoid some technical errors. Also, in response to problems such as LEAs counting jobs in two fiscal years, Education plans to provide more guidance in early December 2009 to states on calculating job estimates. At the state level, officials in Georgia reported plans to make changes to the state’s processes, such as adding internal edit checks so that those who enter the data will have to make corrections as part of the data entry process. Also, Illinois has created an office to work with state agencies to improve their data reporting processes, according to a state official. The state also plans to build more checks into its review of agency data, for example, a check that would compare jobs data against existing employment data to confirm that districts are not reporting more positions than exist in the district. As recipient reporting moves forward, we will continue to review the processes that federal agencies and recipients have in place to ensure the completeness and accuracy of data, including reviewing a sample of recipient reports across various Recovery Act programs to assure the quality of the reported information. As existing recipients become more familiar with the reporting system and requirements, these issues may become less significant; however, communication and training efforts will need to be maintained and in some cases expanded as new recipients of Recovery Act funding enter the system.
In addition to our oversight responsibilities specified in the Recovery Act, we are also reviewing how several federal agencies collect information and provide it to the public for selected Recovery Act programs, including any issues with the information’s usefulness. Our subsequent reports will also discuss actions taken on the recommendations in this report and will provide additional recommendations, as appropriate. We are making two recommendations to the Director of OMB. To improve the consistency of FTE data collected and reported, OMB should continue to work with federal agencies to increase recipient understanding of the reporting requirements and application of the guidance. Specifically, OMB should (1) clarify the definition and standardize the period of measurement for FTEs, and work with federal agencies to align their guidance with OMB’s guidance and across agencies; (2) given its reporting approach, consider being more explicit that “jobs created or retained” are to be reported as hours worked and paid for with Recovery Act funds; and (3) continue working with federal agencies and encourage them to provide or improve program-specific guidance to assist recipients, especially as it applies to the full-time equivalent calculation for individual programs. OMB should also work with the Recovery Board and federal agencies to reexamine review and quality assurance processes, procedures, and requirements in light of experiences and identified issues with this round of recipient reporting and consider whether additional modifications need to be made and if additional guidance is warranted. The jobs data reported by recipients of Recovery Act funds provide potentially useful information about a portion of the employment effect of the act.
At this point, due to issues in reporting and data quality including uncertainty created by varying interpretations of the guidance on FTEs, we cannot draw a conclusion about the validity of the data reported as a measure of the direct employment effect of spending covered by the recipient reports. Even after data quality issues are addressed, these data will represent only a portion of the employment effect. Beyond the jobs that are reported, further rounds of indirect and induced employment gains result from government spending. The Recovery Act also includes entitlement spending and tax benefits, which themselves create employment. Therefore, both the data reported by recipients and other macroeconomic data and methods are necessary to understand the overall employment effects of the stimulus. Economists will use statistical models to estimate a range of potential effects of the stimulus program on the economy. In general, the estimates are based on assumptions about the behavior of consumers, business owners, workers, and state and local governments. Against the background of these assumptions, themselves based on prior research, the effects of different policies can be estimated. Any such estimate is implicitly a comparison between alternative policies. The reliability of any alternative scenario that is constructed depends on its underlying assumptions and the adequacy of evidence in support of those assumptions, as well as on the accuracy of the data that form the basis for what is observed and on how well the model reflects actual behavior. In the broadest terms, economic research using macroeconomic models suggests general rules of thumb for approximating the job impact and the GDP increase for a given amount of stimulus spending. 
In constructing their estimates of the employment impacts of the act, CEA observed that a one percent increase in GDP has in the past been associated with an increase in employment of approximately 1 million jobs, about three quarters of 1 percent of national employment. Similarly, CBO economists have assumed that a one percent increase in output generates somewhere between 600,000 and 1.5 million jobs. As a result, projections of the employment impact of the Recovery Act can be generated from macroeconomic models that estimate output, providing the basis for estimates of changes in employment. CEA estimates of the employment effects of the Recovery Act have been based on statistical projections and allocations using historical relationships. In January 2009, the incoming administration projected the anticipated effects of fiscal stimulus on output and employment in the economy, specifying a prototypical spending package of tax cuts, payments to individuals, and direct spending by federal and state government. The effects of such additional spending on output (GDP) were projected using multipliers, values based on historical experience that estimate the output change per unit of different types of changes in government spending. These output increases were translated into employment effects using a rule of thumb, again based on history, that a 1 percent rise in GDP yields 1 million jobs. The incoming administration’s January 2009 analysis of a prototypical stimulus package found that it would be expected to increase GDP by 3.7 percent and increase jobs by 3,675,000 by the fourth quarter of 2010. The analysis compared the unemployment rate with and without the stimulus. At that time, the unemployment rate for 2009 was projected to be 8 percent with a stimulus and closer to 9 percent without. In May 2009, CEA reported on the anticipated employment effects of the actual Recovery Act as passed by Congress and signed into law by the President. 
That analysis was consistent with the January projections that the Recovery Act (which was deemed to closely resemble the prototypical package earlier assumed) would result in approximately 3.5 million jobs saved or created by the end of 2010, compared to the situation expected to exist in the absence of the act. Later, when the actual unemployment rate rose beyond 9 percent, the administration acknowledged that its earlier projections of unemployment were too low but asserted that, without the Recovery Act, the rate would have been even higher than observed. In September 2009 CEA reported on the effects of Recovery Act spending through the end of August. It noted that statistical analysis of actual economic performance compared to that which might have been expected in the absence of the Recovery Act suggested that the Recovery Act had added “roughly” 2.3 percentage points to GDP in the second quarter and was likely to add even more in the third. Translating that output gain into employment, CEA surmised that employment in August was 1 million jobs higher than it would have been without the act. The recipient reports are not estimates of the impact of the Recovery Act, although they do provide a real-time window on the results of Recovery Act spending. Recipients are expected to report accurately on their use of funds; what they are less able to say is what they would have done without the benefit of the program. For any disbursement of federal funds, recipients are asked to report on the use of funds to make purchases from business and to hire workers. These firms and workers spend money to which they would not otherwise have had access. Recipients could not be expected to report on the expansionary effects of their use of funds, which could easily be felt beyond local, state, or even national boundaries. 
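A back-of-the-envelope sketch can tie together the two rules of thumb discussed above: a spending multiplier (CBO's range for the strongest provisions is roughly 1.0 to 2.5) converts stimulus dollars into a change in GDP, and CEA's rule that a 1 percent rise in GDP is associated with about 1 million jobs converts that output change into employment. The baseline GDP figure below is an illustrative assumption, not a number drawn from this report:

```python
# Illustrative combination of the rules of thumb described above: stimulus
# dollars -> GDP change (via a multiplier) -> jobs (via CEA's 1 percent of
# GDP ~ 1 million jobs rule). The baseline GDP is an assumed figure.

BASELINE_GDP = 14_200e9           # assumed annual GDP, in dollars
JOBS_PER_GDP_PERCENT = 1_000_000  # CEA rule of thumb

def jobs_from_stimulus(stimulus_dollars, multiplier):
    gdp_change = stimulus_dollars * multiplier
    gdp_percent = 100 * gdp_change / BASELINE_GDP
    return gdp_percent * JOBS_PER_GDP_PERCENT

# Hypothetical $100 billion of spending across CBO's multiplier range:
for m in (1.0, 1.75, 2.5):
    print(f"multiplier {m}: ~{jobs_from_stimulus(100e9, m):,.0f} jobs")
```

The wide spread of results across the multiplier range is the point: such projections bound the likely employment effect rather than pin it down, which is why CBO reports ranges rather than single estimates.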
Neither the recipients nor analysts can identify with certainty the impact of the Recovery Act because of the inability to compare the observed outcome with the unobserved, counterfactual scenario (in which the stimulus does not take place). At the level of the national economy, models can be used to simulate the counterfactual, as CEA and others have done. At smaller scales, comparable models of economic behavior either do not exist or cover only a very small portion of all the activity in the macroeconomy. The effect of stimulus on employment depends on the behavior of the recipient of aid. For consumers, it depends on the extent to which their total spending increases. For business firms, it depends on the increase, if any, in their purchases from other business firms or their payrolls. For state and local governments, it is the increase in their purchases of goods and services and their own employment rolls. Within any given group of recipients, choices to spend or save will vary. For example, a consumer with a large credit card balance may use a tax cut to pay down the balance or save more rather than increasing spending. Given that the personal savings rate fell to essentially zero before the recession, households may well choose to rebuild savings rather than spend. A business firm might not see additional capital spending or hiring as advantageous. A state government might decide to bolster its reserves where permitted under law rather than increase its outlays or cut its taxes. In each case, the strength of the program as immediate stimulus is weakened to the extent that all funds are not spent. The extent to which the initial spending reverberates throughout the economy is summarized by a multiplier, a measure of the cumulative impact on GDP over time of a particular type of spending or tax cut. The resulting change in output translates into a change in employment. 
In the context of the Recovery Act recipient reports, the output and employment effects will likely vary with the severity of the economic downturn in a recipient’s location (as reflected by distress in labor markets and the fiscal positions of governments), and the amount of funds received by the recipient. The nature of the projects or activities to which the recipient applies its funds also matters, whether the projects use labor intensively and whether those who are hired will themselves spend or save their earnings. Economists use computer models of the U.S. economy with historical data on employment, GDP, public spending, taxes, and many other factors to study the effects of monetary (e.g., changes in interest rates) and fiscal policies (e.g., changes in government taxing and spending) designed to affect the trajectory of the economy. In general, a fiscal stimulus program like the Recovery Act is aimed at raising aggregate demand – the spending of consumers, business firms, and governments. This may be accomplished by means of tax cuts, grants-in-aid, or direct Federal spending. In response, the recipients may purchase more goods and services than they would have otherwise. This could lead to governments and business firms refraining from planned dismissal of employees or to hiring additional workers. The stimulus may lead to an overall, net increase in national employment and economic output. Models of the nation’s economy can provide estimates of changes in GDP and employment that result from changes in monetary or fiscal policies. In assessing the effects of fiscal policies such as additional government spending or tax cuts on GDP, macroeconomic models can be used to estimate “multipliers,” which represent the cumulative impact on GDP over time of a particular type of spending or tax cut. 
Multipliers translate the consequences of a change in one variable, such as in the demand for goods and services brought about by economic stimulus, on other variables, such as the supply of those goods and services and employment, taking into account “ripple effects” that occur throughout the economy. The size of the multiplier depends on the extent to which changes in additional government spending or revenue translate into changes in spending by recipients and beneficiaries of the additional spending. Spending increases the multiplier, and saving reduces it. The multiplier is also larger when there is slack in the economy (unemployed persons and idle productive capacity). Also, the expansionary effects of government spending are greater when stimulus funds are borrowed rather than raised by taxation. Finally, the multiplier effect in the U.S. will be greater to the extent that new spending, whether by government or individuals, is devoted to domestically-produced goods and services. In general, macroeconomic models and estimated multipliers can provide insights on the potential effect of different types of public spending. Because of the limited historical experience with fiscal stimulus of the magnitude of the Recovery Act, there is uncertainty about the extent to which the multipliers estimated using historical data about the effect of previous business cycles will accurately reflect the stimulus effect this time around. Economic research, however, has developed a basis for constructing reasonable ranges of values. In projecting the anticipated effect of the Recovery Act on national output, the CBO grouped the act’s provisions according to the size of the multiplier—that is, the magnitude of the effect of a particular provision’s spending on GDP (see table 7). Drawing on analyses based on past experience with the results of government spending, CBO has identified a range of 1.0 to 2.5 for multipliers.
For example, a multiplier of 1.0 means a dollar of stimulus financed by borrowing results in an additional dollar of GDP. CBO assumes larger multipliers for grants to state and local governments for infrastructure spending, and lower values—0.7 to 1.9—for transfers not related to infrastructure investment. Figure 5 shows the distribution of Recovery Act funds by multiplier. The employment effects of Recovery Act funds will likely vary with the strength of the labor market in a recipient’s location. Recipients located in areas where labor markets are weak, that is, where unemployment is high, may find it easier to hire people and may be able to do so at lower wages than those located in areas where the recession has had little effect on labor markets. Consequently, recipients located in areas with weak labor markets may be able to employ more people than those located in areas with strong labor markets, all else being equal. The percentage of the nation’s labor force that is unemployed has reached a level not seen in decades. For example, the unemployment rate reached 10.2 percent in October 2009, its highest rate since April 1983. The national unemployment rate was 4.9 percent in December 2007, the month that marked the end of the last business cycle and the beginning of the current recession. In general, the unemployment rate rises and falls over the course of the business cycle, generally increasing during a recession and decreasing during an expansion. Cyclical changes in the national unemployment rate reflect changes in state unemployment rates. State unemployment rates vary over time in much the same way that the national unemployment rate varies—increasing during recessions, decreasing during expansions, but changing direction at different times. Estimates of current labor market strength, as measured by the unemployment rate, differ across states. Figure 6 ranks states according to the most recent available unemployment data—September 2009. 
While the national unemployment rate at the time was 9.8 percent, state unemployment rates ranged from a low of 4.2 percent in North Dakota to a high of 15.3 percent in Michigan. Twenty-seven states had unemployment rates in September 2009 that were below the national rate by one percentage point or more; nine states and the District of Columbia had rates that exceeded the national rate by one percentage point or more; and 14 states had rates within one percentage point of the national rate. Labor markets in every state weakened over the course of the recession, but the degree to which this occurred varies widely across states. Figure 7 shows the geographic distribution of the magnitude of the recession's impact on unemployment, as measured by the percent change in the unemployment rate between December 2007 and September 2009. Alabama's unemployment rate grew the most over this period, increasing by about 182 percent. Other states with relatively high unemployment rate growth over this period include Florida, Hawaii, Wyoming, Idaho, and Nevada, all of which have seen their unemployment rates increase by more than 120 percent. At the other end of the spectrum are states such as Minnesota, Mississippi, Arkansas, North Dakota, and Alaska, whose unemployment rates grew by less than 60 percent between December 2007 and September 2009. Alaska's unemployment rate growth during this period was the slowest, at only about 33 percent. While the recession has weakened labor markets in every state, those in some states may be showing signs of recovering. Table 8 lists the states for which unemployment rates in September 2009 were less than their peak unemployment rates. The unemployment rate peaked in some states as early as May 2009. In several additional states, the unemployment rate was higher in June or July than it was in September.
Although unemployment rates in these states may start to increase again in the future, for the moment it seems that labor markets in these states are getting stronger. Table 9 shows the change in employment between December 2007 and September 2009. Employment in Arizona, Florida, Georgia, Michigan, Nevada, and Oregon in September 2009 was over 7 percent lower than it was in December 2007. On the other hand, employment in Louisiana and South Dakota fell by less than 2 percent over the same period, and employment in Alaska, North Dakota, and the District of Columbia increased during that time. Employment has declined since December 2007, when the current recession began. However, some signs have appeared that the losses in employment are slowing. Job losses in October 2009 numbered 190,000, about equal to the average of about 188,000 per month for August, September, and October 2009. The rate at which employment has declined over the past 3 months is thus lower than in May, June, and July 2009, when job losses averaged about 357,000 per month, and lower still than between November 2008 and April 2009, when job losses averaged about 645,000 per month. The current employment contraction has been more pronounced in the goods-producing sector, in which employment fell by about 17 percent between December 2007 and October 2009, than in the service-providing sector, in which employment fell by about 3 percent over the same period. The goods-producing sector includes the construction and manufacturing industries, in which employment has fallen by about 21 percent and 15 percent, respectively, between December 2007 and October 2009. The goods-producing sector also includes the mining and logging industry, which lost about 6 percent of its jobs during the same time.
Service-providing industries include financial activities, information, professional and business services, and trade, transportation, and utilities, all of which had employment declines of more than 6 percent between December 2007 and October 2009. Employment declines in the leisure and hospitality industry were about 3 percent, and employment in education and health services increased by about 4 percent over the same period. The employment effects of Recovery Act funds allocated to state and local governments will also likely vary with their degree of fiscal stress, as well as with the factors mentioned above. Recessions manifest in the form of lower output, employment, and income, among other things. These reductions lead state and local governments to collect less tax revenue and at the same time cause households' demand for publicly provided goods and services to increase. State governments often operate under various constraints, such as balanced budget requirements, so they generally must react to lower tax revenues by raising tax rates, cutting publicly provided programs and services, or drawing down reserve funds, all but the last of which amplify recessionary pressure on households and businesses. Local governments must do the same unless they can borrow to make up for lost tax revenue. By providing funds to state and local governments, the Recovery Act is intended to forestall, or at least moderate, their program and service cuts, reserve liquidations, and tax increases. In addition to the type of spending undertaken, the size of the multiplier and the resultant employment effects will depend on the extent to which aid is not diverted to reserves. Generally speaking, states with weaker economies and finances will be more likely to spend Recovery Act dollars. States that may suffer little or no harm from a national downturn are less motivated to make full use of any federal assistance.
Rather than increase spending, they may choose to cut taxes or, where permitted by law, add to their reserves. Tax cuts would have some stimulative effect, but additions to reserves would reduce any multiplier effect. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of this increased FMAP may reduce the state share for the Medicaid programs. States are prohibited from using any funds directly or indirectly attributable to the increased FMAP for state rainy day funds, but states have reported using funds made available as a result of the increased FMAP for a variety of purposes, including offsetting general fund deficits and tax revenue shortfalls. The availability of reserves and the possibility of borrowing point out the difficulties of gauging the impact of federal policy by the observed timing of aid flows. The expectation of aid could encourage governments to draw more out of reserves or to borrow more than they would otherwise. The rationale is that the expected aid would replace the reserves or liquidate the new debt. In this way, the timing of aid could postdate the impact. Research on individual consumption has long wrestled with the problem of how expectations influence household decisions. State and local governments must also look forward in making fiscal decisions. The recession has substantially affected the states' fiscal conditions. In recessions, state and local governments are motivated to enact "procyclical" measures that aggravate the downturn. Balanced budget requirements and other constraints cause them to reduce spending and raise taxes, generating what is called "fiscal drag." Federal assistance can reduce the need for such measures. In this way, the negative employment effects of fiscal drag can be precluded and existing jobs can be saved. With sufficient aid, it is possible for state and local governments to go beyond saving existing jobs to creating additional ones.
However, there are likely to be limits to the ability of governments to spend aid quickly enough to affect employment. The recession has substantially reduced states' and local governments' combined tax revenues. Figure 8 indicates that tax revenue collected in the second quarter of 2009 fell from the peak in the second quarter of 2008 by more than $130 billion. State and local revenues are not likely to return to their previous levels until well after the recession has ended. After the 2001 recession, tax receipts did not begin to recover until after the second quarter of 2003, well after the "official" end of the recession in the fourth quarter of 2001. However, the fall in receipts since the second quarter of 2008 has been more dramatic. In a survey of the nation's state governments, the National Governors Association reported that outlays for current services provided through states' general funds decreased by 2.2 percent in fiscal year 2009, which ended in June 2009 for most states. Spending for fiscal year 2010 is projected to fall by 2.5 percent. In light of average annual increases of 5 percent for total state and local government outlays, any decrease is a significant adjustment. Most states have some sort of requirement to balance operating budgets. However, most state governments are able to establish reserve funds. Maintenance of a baseline of 5 percent of annual outlays for a state's reserve fund is regarded by state budget officers as prudent. A lower level could increase a state's borrowing costs. Since 2006 these funds have decreased. In the wake of the 2001 recession, according to an analyst at the Rockefeller Institute of Government, state governments in fiscal year 2002 drew as much as 4.8 percent of their revenues from fund balances. The National Governors Association reports that fund balances peaked in 2006 at $69 billion, or 11 percent of general fund expenditures.
The funds declined to 9.1 percent by 2008 and were estimated at 5.5 percent—$36.7 billion—in June 2009. By fiscal year 2010, these funds are projected to fall further, to 5.3 percent of outlays. In addition, state government reserves varied considerably in 2009. For example, 11 states had total reserves in excess of 10 percent of outlays, while others, such as California, had total reserves of less than 1 percent of outlays, as figure 9 shows. Diversity in the economic and fiscal conditions of the states and differences in the size and composition of the Recovery Act funds they receive suggest that the potential for employment gains varies across states. We will continue work in this area, along with our other work on federal-state fiscal interactions. In commenting on a draft of our report, OMB staff told us that OMB generally accepts the report's recommendations. OMB has undertaken a lessons-learned process for the first round of recipient reporting and will generally address the report's recommendations through that process. We are sending copies of this report to the Office of Management and Budget and to the Departments of Education, Housing and Urban Development, and Transportation. The report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact J. Christopher Mihm or Susan Offutt at (202) 512-5500. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The Office of Management and Budget (OMB) and federal agencies have provided wide-ranging guidance to states on how to report full-time equivalent (FTE) data—that is, jobs created or retained.
OMB staff reported that questions on FTEs dominated the inquiries they fielded during the first round of recipient reporting, and recipients had various understandings of how to report an FTE. Following are selected examples of the challenges of reporting and calculating FTEs, as seen through public housing agencies and four states—California, Florida, Georgia, and Massachusetts. As we reported in September 2009, the Department of Housing and Urban Development (HUD) is using two methods to satisfy reporting requirements for public housing agencies under the Recovery Act. First, OMB and the Recovery Accountability and Transparency Board created and manage www.Federalreporting.gov (Federalreporting.gov), a Web site where all Recovery Act recipients can report on the nature of projects undertaken with Recovery Act funds and on jobs created or retained. Second, HUD developed the Recovery Act Management and Performance System (RAMPS) in response to reporting requirements outlined in section 1609 of the Recovery Act. HUD officials said approximately 96 percent of housing agencies had successfully reported into Federalreporting.gov. Initial reports suggested a lower reporting rate, but this was due to a substantial number of housing agencies incorrectly entering values into certain identification fields, such as the award ID number, the awarding agency, or the type of funding received. HUD officials said that the system did not have validation measures in place to ensure the correct award ID numbers were entered. In addition, housing agencies could not edit the award ID number without submitting a new report. According to a HUD official, OMB initially classified reports that could not be matched with a federal agency as "orphaned." The HUD official told us HUD program and Recovery team staff reviewed reports submitted with nonmatching award ID numbers and OMB's list of reports that could not be matched to determine whether they matched HUD awards.
According to HUD officials, public housing agencies encountered challenges related to registration and system accessibility. For example, a HUD official said the registration process for Federalreporting.gov requires several steps, such as obtaining a DUNS number, registering with the Central Contractor Registration (CCR), and obtaining a Federal Reporting Personal Identification Number (FRPIN). The HUD official told us these steps are necessary for validating the recipient reports because they ensure the appropriate points of contact at the appropriate organizations—in this case, public housing agencies—are reporting for each program. The Federalreporting.gov Web site states that each recipient's point of contact information is taken directly from the CCR and that if an organization changes its point of contact information, it will take 48 hours for Federalreporting.gov to receive the change and e-mail the FRPIN and temporary password to the new point of contact. According to the HUD official, a housing agency's contact information in CCR is sometimes outdated, and the systems are often not updated in time for access to be correctly transferred. Additionally, one housing agency official reported that he saved his data entry as a draft before being timed out of the system but was unable to retrieve the data when he reentered the reporting Web site. A HUD official said that, in the future, HUD and OMB will need to improve the functioning of the system and that they are working to ensure all housing agencies have access to the reporting systems. According to a HUD official, there was widespread misunderstanding by public housing agencies about OMB's methodology for calculating the number of jobs created or retained by the Recovery Act, in part because housing agencies are not familiar with reporting jobs information.
In a few cases, we found that public housing agencies had reported the number of jobs created or retained into Federalreporting.gov without converting the number into full-time equivalents. For example, officials from one housing agency reported the number of people, by trade, who worked on Recovery Act related projects, but did not apply the full-time equivalent calculation outlined by OMB in the June 22 reporting guidance. Additionally, officials from another public housing agency told us that they based the number of jobs they reported into Federalreporting.gov on letters from their contractors detailing the number of positions rather than full-time equivalents created as a result of their Recovery Act-funded projects. In another case, a housing agency official reported having difficulty locating guidance on calculating job creation. As a result, the housing agency may have underreported jobs data from an architectural firm providing design services for a Recovery Act window replacement project at a public housing complex. HUD officials cited the fact that OMB and HUD provided additional clarification and guidance close to the deadline for recipient reporting as a factor in housing agencies’ confusion about the methodology for counting jobs. According to a HUD official, HUD was in discussions with OMB about finalizing and clarifying portions of the June 22, 2009, job guidance right up to the end of September. In early September, HUD posted the OMB guidance to its Web site and provided information by e-mail to housing agencies on registration for Federalreporting.gov, as well as links to Web seminars and training provided by OMB. HUD issued further guidance to public housing agencies by e-mail on September 25, 2009, approximately 2 weeks before the October 10, 2009, deadline for recipient reporting, providing templates and data dictionaries tailored to the Public Housing Capital Fund. 
The guidance also reiterated the recipient reporting responsibilities for public housing agencies. HUD officials told us they did not have enough time to translate some of the terminology into concrete terms that would be clearer to housing agency officials. For example, HUD posted a jobs calculator spreadsheet to its Web site, and HUD field staff would direct housing agencies to this guidance when they asked specific questions about how to calculate jobs. Nonetheless, greater instruction may be needed beyond what was provided on the job calculator's instructions page. A HUD official said it seemed that some housing agencies may have pulled information for the recipient reports from the wrong fields in the job calculator, which produced errors. A HUD official stated that HUD will work with OMB to improve housing agencies' understanding of the methodology for reporting in full-time equivalents prior to the next round of recipient reporting in January 2010. State officials from the California Recovery Task Force and the California Office of the State Chief Information Officer (CIO) explained that while the centralized reporting structure had several benefits, challenges with changing reporting requirements from federal agencies and technological glitches still occurred. Because California is a centralized reporting state, each state agency reported directly to the CIO through the California ARRA Accountability tool, and the Task Force is responsible for uploading the data to Federalreporting.gov. However, according to state officials, local government agencies that received Recovery Act dollars directly from the federal government are not under the Task Force's purview and report to Federalreporting.gov on their own. State officials stated that a centralized reporting structure allows the CIO to act as a liaison between OMB and the state for faster reconciliation of issues.
The CIO, on behalf of the Task Force, was responsible for collecting, validating, and uploading data from state agencies to Federalreporting.gov. The state officials believed the process went well overall and commended their state team for successfully reporting into Federalreporting.gov. The Task Force officials believed the reporting process could be improved if OMB provided a comprehensive list of awards to better crosscheck reporting. California officials stated that many of the challenges in reporting did not come from the additional information requested during October 11 to 20, but from changes immediately prior to the September 30 cut-off date. These changes included the Department of Education's September 21 request to include Central Contractor Registration numbers and the Federal Highway Administration's (FHWA) changes to four of the data elements, including the award amounts. California officials now have a greater appreciation of what to expect during the reporting process. They believe that continuous communication with the state agencies, including weekly data group meetings attended by as many as 60 people, contributed to the overall success of the reporting process. They also have been developing their own internal logic checks to assist with data validation. California officials continue to be concerned that problems at Federalreporting.gov and changing agency requirements will cause subrecipient data, initially collected correctly in accordance with federal guidance, to be rejected, which would result in penalties for late submissions. The Florida Department of Transportation (FDOT) has reporting requirements under both sections 1512 and 1201 of the Recovery Act. Although the state had an existing system in place that could be used for section 1201 reporting, officials decided to develop two additional systems for section 1512 reporting.
One system was created to assist FDOT in reporting information to the state Recovery Czar, and a second employment reporting system was created to allow subrecipients to enter the total number of employees, payroll, and employee hours for Recovery Act-funded highway projects. According to state officials, the system was launched on May 29, 2009, and is currently in use. FDOT officials experienced no significant reporting problems while submitting more than 400 reports. Florida began preparing for reporting early and conducted extensive training to assist contractors, consultants, and local agencies in the collection of employment data required by the Recovery Act. For example, FDOT's Office of Inspector General (OIG) developed five computer-based training modules to assist department staff and external partners in the use of the electronic reporting system. FDOT also partnered with its OIG and the Florida Division of the Federal Highway Administration (FHWA) to conduct town hall presentations for its seven District Offices and Florida's Turnpike Enterprise. The presentations were designed to ensure consistent use of the electronic employment data application. In September, OIG followed up with a survey to local agencies to determine their levels of proficiency in using the department's electronic employment reporting system and to solicit feedback. FDOT's electronic employment data reporting system provides for several levels of data review and approval. For example, once a subrecipient enters its monthly employment data into the electronic system, the data are available for review and subsequent approval by the local agency project manager. Once approved, the data are available for review and approval by the department's district office project manager. The district office project manager performs a reasonableness check of the submitted data prior to electronically approving them.
The electronic employment data are then available for review by OIG, which performs two types of analyses. First, OIG identifies whether the subrecipient should be reporting job data by comparing submitted data (and subrecipient identifiers) against the master list of awarded Recovery Act transportation projects. Second, OIG compares previously submitted subrecipient information against information contained in the current submission to determine any data anomalies or variances. Should any significant data anomalies or variances occur, OIG will contact the appropriate district and local agency. FDOT did not require subrecipients to submit verification of their job data, but FDOT advised subrecipients to maintain documentation for review. For the two subrecipients we visited, we found that the extent to which documentation was being maintained varied. For example, one subrecipient kept timesheets for all employees associated with Recovery Act projects, while another had documentation for its hourly employees but not its management employees. Reporting Process: In Georgia, one of the highway contractors we visited noted that it was responsible for reporting on about 30 Recovery Act-funded projects with approximately 10 subrecipients for each project. The contractor stated that it is required to fill out a monthly report (FHWA Form 1589) indicating the number of employees, the hours worked, and the dollars charged to the job through a direct portal created by the Georgia Department of Transportation (GDOT). According to the contractor, this reporting requirement is in the contract, and GDOT will withhold payment if this report is not completed. As the general contractor, the firm is also responsible for collecting the Form 1589 information from its subcontractors on each job. Officials with the firm stated that they would withhold payment from subcontractors that fail to provide the information. We examined these contracts and confirmed these requirements.
In addition to the Form 1589 report, the contractor also submits certified payroll to GDOT on a monthly basis. Guidance and Challenges: In terms of guidance, the contractor noted that not much training was provided but that it did not necessarily need much training. The main challenges raised involved making changes within the GDOT system and completing the DUNS number field. For example, officials explained that once a report was submitted into GDOT's system, it could not be edited, which made errors in entry or reporting difficult to correct. The contractor has discussed this issue with GDOT and hopes a solution will be reached for the next reporting cycle. The DUNS number requirement was an issue for several subrecipients because they did not have a number and were under the impression that obtaining one involved a cost, which it did not. After discussions with GDOT, it was determined that subrecipients did not need a DUNS number, but the field could not be left blank. Therefore, GDOT advised the contractor to have its subrecipients complete the field by entering "not applicable." The contractor suggested that improvements in reporting could be achieved by delaying the reporting date to GDOT to allow more time to handle delays in payroll and in obtaining supporting information. Overall, the contractor felt that the September report was the most accurate month reported to date and believed greater accuracy will be achieved over time. Data Quality: Officials of the highway contractor told us they think they have a handle on the process and were confident in the data submitted. In their words, "if it's inaccurate, we paid somebody wrong," since the report comes out of their payroll system. In terms of data from subcontractors, the officials noted that their confidence varied somewhat across subcontractors.
Officials explained that the quality of information varied based on the capacity and expertise of the subcontractor (that is, its experience in reporting and whether a certified payroll is in place). Officials explained they had greater confidence in subcontractors that had certified payroll. They provided several examples of subrecipients who were truckers or haulers who are not familiar with reporting and often are small operations of one employee. Officials noted that the number of truckers or haulers on a project is often large in order to meet disadvantaged business requirements. Officials questioned whether truckers and haulers should be part of the jobs created or retained count, since similar positions may not be counted for subcontractors that provide materials such as pipe. Officials believed that, over time, subcontractors would become more comfortable and familiar with the process. Reporting Process: An official at a major highway contractor we interviewed in Massachusetts explained that one of his primary responsibilities as the Construction Cost Accountant is to certify payroll records and ensure compliance with federal labor standards. This company is the general contractor (or prime contractor) on six Recovery Act highway construction projects. A company official stated that there was no additional burden associated with filing the quarterly recipient reports because the company routinely reports employment data to the Massachusetts Department of Transportation (MassDOT), Highway Division, for federally funded highway projects through the MassDOT Highway Division's Equitable Business Opportunities (EBO) system. Although there were additional data elements required for Recovery Act projects, the company official noted that FHWA Form 1589 specifies these additional reporting elements, and they have been added to the EBO system to make it easier for contractors and subcontractors to report on a monthly basis. According to the company official, the process was very straightforward.
Contractors and subcontractors log into the EBO system and can see detailed information on all the projects they are working on for the MassDOT Highway Division. Typically, by the 15th day of each month, contractors and subcontractors upload their certified payroll files into the EBO system. However, for the September submission, MassDOT's Highway Division required contractors to submit their employment reports early, by October 9, so that the state could meet its October 10 deadline for submitting the quarterly Recovery Act report. Guidance and Challenges: The official noted that the only guidance he received came from the MassDOT Highway Division in the form of training on the EBO system, which he said helped contractors and subcontractors transition from the old employment reporting system to the EBO system. He noted that for contractors that were used to working with complex accounting systems, this training was adequate, but for smaller contractors with little computer experience, the training could have been better. In general, the official observed that most contractors and subcontractors are very pleased with the new system because it interfaces well with their existing accounting and certified payroll databases and because the cost is low. Data Quality: There are several steps for ensuring data quality. First, a company official explained that most large contractors and many subcontractors have accounting and payroll data systems that interface well with the EBO database, so they are able to upload data from these systems directly into the EBO system, eliminating the need to reenter employment data. However, some smaller contractors do not have these systems and thus must enter the data by hand each month. The company official stated that he is not concerned with the quality of the data because they are verified both internally and by the MassDOT Highway Division.
The official explained that the MassDOT Highway Division places responsibility for ensuring that subcontractors file monthly reports on the general contractor, and his company ensures subcontractor compliance by withholding their reimbursements. Although it is rarely needed, the official noted that withholding payments to subcontractors is a very effective tool for getting subcontractors to submit their monthly reports. Furthermore, all subcontractor employment reports are verified against the daily duty log that is kept by the project supervisor, who is an employee of the company. The MassDOT Highway Division also posts resident engineers at each job site on a daily basis, and they keep a daily diary of employment and work status that is used to verify the data submitted by general contractors in the MassDOT Highway Division project management system. This is the same system that is used to generate contractor invoices for reimbursement. According to Education's clarifying guidance on jobs estimation, local educational agencies (LEA) are generally required to calculate a baseline number of hours worked, consisting of a hypothetical number of hours that would have been worked in the absence of Recovery Act funds. Once LEA officials derive this baseline, they deduct it from the actual hours worked by individuals whose employment is attributable to Recovery Act funding to determine the number of hours created or retained. To derive the number of full-time equivalents (FTE) created or retained, as shown in table 10, they then divide the resulting number of hours created or retained by the number of FTE hours in the quarter or reporting period. For example, in the table above, Employees 3 and 6 went from being unemployed (0 hours of employment) in the hypothetical situation where no Recovery Act funds are available to full-time (520 hours) and part-time (300 hours) employment, respectively.
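In code, the hours-to-FTE conversion described in the guidance reduces to subtracting baseline hours from actual hours and dividing by the full-time hours in the reporting period. A minimal sketch follows, using only the aggregate figures from the table 10 example (2,460 actual hours, 1,320 baseline hours, and a 520-hour full-time quarter); the function name is illustrative:

```python
def fte_created_or_retained(actual_hours, baseline_hours,
                            ft_hours_per_quarter, quarters=1):
    """FTEs created or retained, per Education's jobs-estimation guidance.

    Hours attributable to Recovery Act funds are the actual hours worked
    minus the hypothetical hours that would have been worked without the
    funds, divided by the full-time hours in the reporting period.
    """
    hours = sum(actual_hours) - sum(baseline_hours)
    return hours / (ft_hours_per_quarter * quarters)

# Quarter 1: (2,460 - 1,320) / 520 hours.
q1 = fte_created_or_retained([2460], [1320], 520)
print(round(q1, 2))  # 2.19

# Cumulative reporting: quarter 2 hours are added to quarter 1 hours
# and divided by the full-time hours for both quarters (1,040).
q2 = fte_created_or_retained([2460, 2460], [1320, 1320], 520, quarters=2)
print(round(q2, 2))  # 2.19
```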
Employee 2 went from part-time (300 hours) to full-time (520 hours). Employee 5 remained a part-time employee but worked an additional 100 hours in the reporting quarter. Taking the sum of actual hours worked in the reporting quarter (2,460) and subtracting the hours worked in the hypothetical baseline quarter (1,320), we are left with 1,140 created or retained hours. For the first reporting quarter, LEA officials divide the result by the number of FTE hours in that quarter (520). The total FTEs created or retained in Quarter 1 is 2.19. Results should be reported cumulatively, so in the second reporting quarter (Q2), the total hours worked in Q2 will be added to the hours worked in Q1 and divided by the hours in a full-time schedule for two quarters (1,040 hours). For example, if in quarter 2, all employees reported in quarter 1 are retained and the baseline remains unchanged, we would again have 1,140 hours created or retained. To get the final cumulative FTE created or retained, officials would sum 1,140 for quarter 1 with 1,140 for quarter 2 to get 2,280 total hours created or retained. Recipients should divide this by the sum of the hours in a full-time schedule for those two quarters (1,040). The result is again 2.19 FTE created or retained in quarter 2. J. Christopher Mihm or Susan Offutt at (202) 512-5500. The following staff contributed to this report: J. Christopher Mihm, Nancy Kingsbury, and Katherine Siggerud (Managing Directors); Susan Offutt (Chief Economist); Susan Irving, Yvonne Jones, Thomas McCool, and Mathew Scire (Directors); Angela Clowers (Acting Director); Robert J. Cramer (Associate General Counsel); Thomas Beall, James McTigue, Max Sawicky (Assistant Directors); Judith C.
Kordahl (Analyst-in-Charge); and Jaime Allentuck, Darreisha Bates, Don Brown, Stephen Brown, Tina Cheng, Andrew Ching, Steven Cohen, Michael Derr, Robert Dinkelmeyer, Shannon Finnegan, Timothy Guinane, Philip Heleringer, Don Kiggins, Courtney LaFountain, John McGrail, Donna Miller, Elizabeth Morrison, Jason Palmer, Beverly Ross, Tim Schindler, Paul Schmidt, Jennifer Schwartz, Jonathan Stehle, Andrew J. Stephens, James Sweetman, and William Trancucci. The state teams for the bimonthly Recovery Act letter reports also contributed to this report.

The American Recovery and Reinvestment Act of 2009 (Recovery Act) requires recipients of funding from federal agencies to report quarterly on jobs created or retained with Recovery Act funding. The first recipient reports filed in October 2009 cover activity from February through September 30, 2009. GAO is required to comment on the jobs created or retained as reported by recipients. This report addresses (1) the extent to which recipients were able to fulfill their reporting requirements and the processes in place to help ensure data quality and (2) how macroeconomic data and methods, and the recipient reports, can be used to assess the employment effects of the Recovery Act. GAO performed an initial set of basic analyses on the final recipient report data that first became available at www.recovery.gov on October 30, 2009; reviewed documents; interviewed relevant state and federal officials; and conducted fieldwork in selected states, focusing on a sample of highway and education projects. On October 30, www.recovery.gov (the federal Web site on Recovery Act spending) reported that more than 100,000 recipients reported hundreds of thousands of jobs created or retained.
Given the national scale of the recipient reporting exercise and the limited time frames in which it was implemented, the ability of the reporting mechanism to handle the volume of data from a wide variety of recipients represents a solid first step in moving toward more transparency and accountability for federal funds. Because this effort will be an ongoing process of cumulative reporting, GAO's first review represents a snapshot in time. While recipients GAO contacted appear to have made good faith efforts to ensure complete and accurate reporting, GAO's fieldwork and initial review and analysis of recipient data from www.recovery.gov indicate a range of significant reporting and quality issues that need to be addressed. For example, GAO's review of prime recipient reports identified the following: Erroneous or questionable data entries that merit further review: (1) 3,978 reports that showed no dollar amount received or expended but included more than 50,000 jobs created or retained; (2) 9,247 reports that showed no jobs but included expended amounts approaching $1 billion; and (3) instances of other reporting anomalies, such as discrepancies between award amounts and the amounts reported as received, which, although relatively small in number, indicate problematic issues in the reporting. Coverage: While OMB estimates that more than 90 percent of recipients reported, questions remain about the other 10 percent. Quality review: While less than 1 percent were marked as having undergone review by the prime recipient, over three-quarters of the prime reports were marked as having undergone review by a federal agency. Full-time equivalent (FTE) calculations: Under OMB guidance, jobs created or retained were to be expressed as FTEs. GAO found that data were reported inconsistently even though significant guidance and training were provided by OMB and federal agencies.
While FTEs should allow for the aggregation of different types of jobs--part-time, full-time, or temporary--differing interpretations of the FTE guidance compromise the ability to aggregate the data. Although there were problems of inconsistent interpretation of the guidance, the reporting process went relatively well for highway projects. Transportation had an established procedure for reporting prior to enactment of the Recovery Act. In the cases of Education and Housing, which do not have this prior reporting experience, GAO found more problems. Some of these have been reported in the press. State and federal officials are examining these problems and have stated their intention to deal with them.
This section describes the petroleum refining industry and the five key regulations that we reviewed. According to data from EIA, there were 143 petroleum refineries in the United States as of January 2013, with a capacity to process 17.8 million barrels of crude oil per day. While there are refineries in most regions of the country, most refining capacity (almost 90 percent) is located in the Gulf Coast, West Coast, and Midwest regions (see fig. 1). These refineries employed over 70,000 people in 2013. Refineries process crude oil into products primarily through a distillation process that separates crude oil into different fractions based on their boiling points, which can then be further processed into final products. One barrel of crude oil can be processed into varying amounts of gasoline, diesel, jet fuel, and other petroleum products depending on the configuration—or complexity—of the refinery and the type of crude oil that is being refined. Through the addition of specialized equipment, refineries can be optimized—or “upgraded”—to produce greater proportions of specific types of products or to use different types of crude oil. For example, a coker unit upgrades the low-value residual oil from the distillation process into higher value products such as diesel, increasing a refinery’s ability to process heavier crude oils. As shown in figure 2, from a barrel of crude oil, U.S. refineries primarily produce gasoline, diesel, and jet fuel that are used in the transportation sector, along with heating oil and liquefied petroleum gases such as propane used in home heating. The U.S.
petroleum refining industry consists of firms of varying sizes that, in addition to operating refineries, may also have operations in other related industry segments: (1) the upstream segment, which consists of the exploration for and production of crude oil; (2) the midstream segment, which consists of pipelines and other infrastructure used to transport crude oil and refined products; (3) the downstream segment, which consists of the refining and marketing of petroleum products such as gasoline and heating oil; and (4) the renewable fuels segment, where biorefineries produce renewable fuels that are blended with petroleum products at wholesale terminals before being distributed to consumers. To varying degrees, refiners may primarily operate refineries—these are called merchant refiners—or may be integrated, participating in various other related industry segments. HollyFrontier Corporation is an example of a merchant refiner that purchases crude oil from unaffiliated producers and sells refined products to other companies operating retail fuel outlets, while Chevron is an example of a fully integrated company that also produces crude oil and operates pipelines and retail fueling outlets across the United States. Crude oil, petroleum products, and renewable fuels are transported between market participants through an extensive supply infrastructure including pipelines, tanker vessels, rail, trucks, wholesale terminals, and retail outlets. (See fig. 3.) In 2012, refineries received the majority of their crude oil by pipeline (over 50 percent) and by tanker vessel (37 percent), with trucks and rail generally playing a more limited role, according to EIA data. As we reported in 2007, according to industry officials and experts, the refining industry was a low-return industry for much of the prior two decades.
Retail prices for regular gasoline averaged $3.63 per gallon in 2012, the highest annual average price when adjusted for inflation since 1976, the earliest comparable data available from EIA. Retail prices have declined in 2013—gasoline averaged $3.55 in the first half of 2013 and $3.18 in November 2013—but are still near historic highs. Market dynamics anywhere along the supply chain can influence consumer prices, from upstream crude oil production and the production of renewable fuels through downstream refining and retailing. According to EIA data, increases in crude oil costs have been the largest component of the recent increases in gasoline prices. The refining component of prices—including labor, materials, energy, and other costs of the refining process, as well as profits to refinery owners—has fluctuated over time but has not increased in a significant way since 2000, when EIA began reporting estimates of the components of retail prices (see fig. 4). The five key environmental regulations affecting the domestic refining industry that we reviewed are concerned with various health, environmental, and other issues. Under the RFS, since 2006, transportation fuels sold in the United States have been required to contain increasing amounts of renewable fuels such as ethanol and biodiesel. EPA is responsible for administering the RFS and annually issues regulations that establish the percentage of gasoline and diesel fuels that refiners, importers, and other obligated parties must ensure are renewable fuels. Congress established the RFS in light of concerns such as climate change and the nation’s dependence on imported crude oil. As shown in figure 5, the law generally required that transportation fuels contain 9 billion gallons of renewable fuels in 2008, and that volumes increase 4-fold through 2022 to 36 billion gallons.
The Administrator of EPA is authorized to waive the RFS levels established in the act if the Administrator determines—in consultation with the Secretaries of Agriculture and Energy—that implementing the requirement would severely harm the economy or environment, that there is an inadequate domestic supply, or in certain other situations. The major source of renewable fuels has traditionally been ethanol produced from corn; however, as we reported in 2009, the increased cultivation of corn for ethanol, its conversion into renewable fuels, and the storage and use of these fuels could affect water supplies, water quality, air quality, soil quality, and biodiversity. Under the RFS’ statutory provisions, the increasing amounts of renewable fuels are to primarily come from renewable fuels other than corn ethanol—called advanced biofuels—that meet certain criteria, including reducing GHG emissions by at least 50 percent compared with the gasoline or diesel fuel they displace. According to EPA, achieving the RFS’ statutory blending levels in 2022 could result in total benefits—including those related to overall fuel costs, energy security, health, and GHG effects—of between $13 and $26 billion and could reduce GHG emissions by 138 million metric tons of carbon dioxide equivalent emissions, equal to taking about 27 million vehicles off the road. The federal government has regulated vehicle fuel economy through CAFE standards since 1978 and, more recently, aligned these standards with new GHG vehicle emission standards in a joint national program aimed at reducing oil consumption and GHG emissions from the transportation sector. CAFE standards are administered by the National Highway Traffic Safety Administration (NHTSA) and require that vehicle manufacturers meet fleet-wide average fuel economy standards for vehicles. 
The Energy Independence and Security Act of 2007 instituted several changes to the CAFE standards and, in 2009, the administration announced a new program to increase vehicle fuel economy and reduce vehicle GHG emissions, which was implemented by a joint rulemaking with NHTSA raising CAFE standards and EPA establishing the first GHG emissions standards for vehicles. Although the CAFE and GHG vehicle emission standards are distinct, their targets were aligned for compliance purposes. NHTSA and EPA put the national program into place by issuing coordinated regulations covering vehicle model years 2012 to 2025. As shown in figure 6, fuel economy standards for cars largely remained unchanged from 1990 through 2010, but vehicle manufacturers are now expected to meet increasingly stringent standards reaching the projected combined average fuel economy of about 50 miles per gallon by 2025—about 80 percent more efficient than required under the 2011 standards. EPA estimated that the 2011-2025 standards may save consumers and businesses $1.7 trillion, reduce oil consumption by 12 billion barrels, and reduce GHG emissions by 6 billion metric tons over the lifetime of the vehicles sold during model years 2011-2025. Under the Clean Air Act, EPA is authorized to establish certain standards for new motor vehicles and fuels to address air pollution that may reasonably be anticipated to endanger public health or welfare. On May 21, 2013, EPA proposed the Tier 3 standards, and on March 3, 2014, EPA announced the final Tier 3 standards, which establish more stringent vehicle emission standards and reduce the sulfur content of gasoline. (Because EPA finalized this rulemaking after the draft report was completed and provided to agencies, the views of stakeholders and other information on Tier 3 that we reviewed and summarize in the rest of this report relate primarily to the proposed standards.
EPA stated that the final rulemaking is very similar to the proposal, and that EPA made some changes—including to the sulfur provisions—based on public input.) According to EPA, more than 149 million Americans experience unhealthy levels of air pollution that has been linked to respiratory and cardiovascular problems and other adverse health effects. Cars and light trucks are significant contributors to air pollution, and EPA estimated that the Tier 3 standards will reduce pollution from such sources. The standards set more stringent tailpipe emission standards for new vehicles and generally require refiners to lower the sulfur content of gasoline from 30 parts per million (ppm) to 10 ppm on an annual average basis by 2017, among other things. According to EPA, reducing the sulfur content of gasoline allows emissions control systems to work more effectively for existing and new vehicles and would therefore enable more stringent vehicle emissions standards. EPA estimated that the Tier 3 standards would reduce on-highway vehicle emissions of nitrogen oxides, pollutants that have been linked to respiratory illnesses, by 10 percent in 2018, and 25 percent in 2030. According to EPA estimates, by 2030, annual emission reductions from the Tier 3 standards would generate annual benefits of between $6.7 and $19 billion and prevent up to 2,000 premature deaths annually. EPA estimated that the vehicle and fuel standards would cost approximately $1.5 billion in 2030, including costs for refiners to install and operate equipment to remove sulfur from gasoline, as well as costs for vehicle manufacturers to improve the emissions performance of vehicles. Under the Clean Air Act, EPA is authorized to take certain steps to address emissions from stationary sources, including refineries.
EPA has regulated certain emissions of air pollutants from stationary sources for several decades, and EPA recently issued rules concerning how GHGs are to be included in certain existing permitting processes. Specifically, permitting authorities are to include GHG emission control requirements in Prevention of Significant Deterioration (PSD) permits and certain other permits issued to refineries and other stationary sources that trigger GHG emissions thresholds. Since 2011, the construction of new refineries and certain modifications of existing refineries have generally been subject to the use of the “best available control technology” for GHG emissions. The best available control technology is determined for each facility based on an analysis of available technologies considering cost and other factors. According to EPA, in most cases, the best available control technology selected for GHGs consists of energy efficiency improvements. For example, for refineries, this could involve the installation of heat recovery units, which capture and use otherwise wasted heat in the refinery process. Such energy efficiency improvements can lower GHG emissions and other pollutants while reducing fuel consumption and saving money. Current regulations do not require existing facilities to take any steps to control GHG emissions unless they undertake a major modification. Examples of major modifications at a refinery include a significant expansion of crude oil processing units or the installation of new secondary processing units that would increase GHG emissions above specified thresholds. California’s LCFS aims to lower GHG emissions by reducing the level of carbon in transportation fuels. Established by the California Air Resources Board (CARB) following state legislation and an executive order, the LCFS has been fully in effect since January 2011.
The LCFS would change the mix of fuels and vehicles in California to reduce emissions throughout the fuel “life cycle”—which includes emissions associated with producing, transporting, distributing, and using the fuel. Progress is measured using carbon intensity (CI) scores, which reflect each fuel’s life cycle GHG emissions. Refiners generally are required to ensure that the overall CI score for their fuels—which can include gasoline, diesel, and their blendstocks and substitutes—meets the annual carbon intensity target for a given year. Unlike the RFS, which requires that certain types of renewable fuels be used, under the LCFS, refiners can meet the CI reduction targets using a variety of low carbon fuel technologies. Low carbon fuel technologies include renewable fuels from waste and cellulosic materials, natural gas, electricity used in plug-in vehicles, and hydrogen used in fuel cell vehicles. The original LCFS statewide reduction targets for gasoline, diesel, and their substitutes started at 0.25 percent of 2010 values in 2011 and increased to 10 percent by 2020. However, in 2013 a state Court of Appeal found that CARB must correct certain aspects of the procedures by which the LCFS was originally adopted. CARB officials subsequently announced that a regulatory package would be proposed in 2014, and that the 2013 standards—a 1 percent decrease in carbon intensity from 2010 values—will remain in effect through 2014. To comply with the LCFS, refiners can produce their own low carbon fuels, buy such fuels from other producers to blend into their products and sell on the market, or purchase credits generated by others. Refiners can also generate credits—which can be banked and traded—if their use of low carbon fuels results in greater-than-required carbon intensity reductions.
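As an illustration of this accounting, a refiner's overall CI score can be treated as a volume-weighted average of the CI scores of the fuels it supplies, compared against the annual target. This is a hedged sketch only: the fuel volumes, CI values, and target below are hypothetical illustrations, not actual CARB figures or the LCFS regulation's precise credit formula.

```python
# Hypothetical sketch of LCFS-style carbon intensity (CI) accounting.
# All volumes, CI scores, and the annual target are illustrative values.

def weighted_ci(fuel_pool):
    """Volume-weighted average CI of a pool of (volume, CI score) fuels."""
    total_volume = sum(volume for volume, _ in fuel_pool)
    return sum(volume * ci for volume, ci in fuel_pool) / total_volume

# A petroleum blendstock plus a low-carbon substitute: (gallons, CI score)
pool = [(900_000, 99.0), (100_000, 40.0)]

average_ci = weighted_ci(pool)               # 93.1
annual_target = 98.0                         # hypothetical annual CI target
meets_target = average_ci <= annual_target   # True: below the target
```

In this illustration, blending a modest share of a much lower-carbon fuel pulls the pool's average CI below the hypothetical target; the excess reduction is what could generate bankable, tradable credits.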
CARB estimated that the 2020 targets would reduce GHG emissions associated with the transportation sector in California by 10 percent in 2020, or 23 million metric tons of carbon dioxide equivalent. Stakeholders we interviewed identified three major changes that have likely affected the domestic petroleum refining industry in recent years. First, crude oil production in the United States and Canada has increased, which has lowered the cost of purchasing crude oil for some refiners but poses some challenges related to crude oil transportation infrastructure constraints and the types of crude oils produced. Second, after many years of generally increasing domestic consumption of petroleum products, consumption has fallen since 2005, resulting in a smaller domestic market for refiners. Third, two key environmental regulations—CAFE and GHG vehicle emission standards and the RFS—have likely recently contributed to declining consumption of petroleum fuels, and compliance with the RFS has increased costs for some refiners. U.S. and Canadian crude oil production has increased in recent years, leading to lower crude oil costs for some refiners, according to several stakeholders we contacted. According to EIA data, U.S. production of crude oil reached its highest level in 1970 and generally declined through 2008, falling to almost one-half of its peak. During this time, the United States increasingly relied on imported crude oil to meet growing domestic energy needs. However, recent improvements in technologies have allowed companies that develop petroleum resources to extract oil from shale formations that were previously considered to be inaccessible because traditional techniques did not yield sufficient amounts for economically viable production.
In particular, the application of horizontal drilling techniques and hydraulic fracturing—a process that injects a combination of water, sand, and chemical additives under high pressure to create and maintain fractures in underground rock formations that allow oil and natural gas to flow—have increased U.S. crude oil and natural gas production. As shown in figure 7, monthly domestic crude oil production has increased by over 55 percent through September 2013 compared with average production in 2008. According to EIA, increases in production in 2012 and 2013 were the largest annual increases since the beginning of U.S. commercial crude oil production in 1859. Much of the increase in crude oil production has been from shale and other formations, such as the Bakken in North Dakota and the Eagle Ford in Texas, according to EIA data. Similarly, crude oil production in Canada—the largest foreign supplier of crude oil to the United States—has also increased significantly in recent years. From 2005 through 2012, total Canadian crude oil production increased by 32 percent and U.S. imports from Canada increased almost 50 percent. The rapid growth in U.S. and Canadian crude oil production has lowered the cost of crude oil for some domestic refiners that have the access and ability to process these crude oils. For example, West Texas Intermediate crude oil—a domestic crude oil used as a benchmark for pricing—was $17.60 per barrel less expensive in 2012 than Brent, an international benchmark crude oil from the European North Sea that was historically about the same price as West Texas Intermediate. Those refineries able to take advantage of these lower priced crude oils have benefited because crude oil costs are the largest cost for refiners. However, not all refineries have been able to take advantage of these crude oils to the same extent, for two key reasons: Transportation infrastructure challenges.
The development of domestic and Canadian crude oil production has created some challenges for U.S. crude oil transportation infrastructure because some of the growth in production has been in areas with limited transportation linkages to refining centers. Most of the system of crude oil pipelines in the United States was constructed in the 1950s, 1960s, and 1970s to accommodate the needs of the refining sector and demand centers at that time. According to DOE officials, this infrastructure was designed primarily to move crude oil from the South to the North, but emerging crude oil production centers in Western Canada, Texas, and North Dakota have strained the existing pipeline infrastructure. Though pipeline capacity has increased—investments increased pipeline capacity to deliver crude oil to a key hub at Cushing, Oklahoma, by about 815,000 barrels per day from 2010 through 2013—EIA reported that it has been inadequate. Because of these challenges, some refineries may not have been able to take full advantage of crude oil production increases or have had to rely on other, more expensive crude oil transportation options such as truck, rail, or barge. For example, two of the refineries we visited recently installed facilities to enable them to receive crude oil from North Dakota or Canada by rail. According to EIA data, while refinery receipts of crude oil by these methods of transportation are a small percentage of total receipts, they have increased 57 percent from 2011 to 2012. Infrastructure constraints have, according to EIA, contributed to discounted prices for some domestic crude oils. Configuration constraints at refineries. Increasingly, the crude oil being produced in the United States and Canada has different characteristics from the crude oils that some domestic refineries are configured to use. Production of new domestic crude oil has tended to be light and sweet, whereas a portion of new Canadian production has consisted of heavy and sour crude oils.
To a certain extent, some refineries can use these crude oils, but some have invested in new equipment in order to do so. For example, representatives of one refiner told us they had invested over $2.2 billion in a project including a new coking unit at a refinery to refine heavier and more sour crude oil from Canada. After decades of generally increasing domestic consumption of petroleum products, consumption has declined since 2005, resulting in a smaller domestic market for refiners, according to several stakeholders we contacted. Overall, consumption of gasoline, diesel, and other petroleum products in the United States increased from 1983 through 2005. In 2007, EIA projected that U.S. consumption would increase by nearly 30 percent between 2005 and 2030. As we reported in late 2007, trends in domestic refining capacity had not kept pace with consumption growth, though it was unclear whether and for how long that market tightness would continue. However, as shown in figure 8, domestic consumption of petroleum products overall peaked in 2005 at 20.8 million barrels per day, and it declined by 11 percent through 2012. Consumption of gasoline, diesel, and jet fuel peaked in 2007 and declined by over 8 percent through 2012. More recent data indicate that these trends may now be starting to shift, as EIA estimated that petroleum product consumption increased in the first 11 months of 2013 compared with the first 11 months of 2012. According to several stakeholders we contacted and information we reviewed, a number of factors can affect consumption of petroleum products, including economic activity and crude oil and petroleum prices. For example, the recession of 2007 to 2009 reduced economic activity and demand for gasoline, and historically high gasoline prices have discouraged the use of gasoline. Stakeholders and information we reviewed also cited the effect of more stringent fuel economy standards and the RFS, which are discussed in the next section. 
Several stakeholders told us that this broad shift from growing to falling consumption of petroleum products has affected the domestic refining industry because it has resulted in a smaller domestic market. The U.S. market is important for domestic refineries because U.S. refiners have historically primarily sold their products domestically. On average, the United States exported almost 1 million barrels per day of domestic petroleum products from 2000 through 2005—less than 6 percent of U.S. refinery production. As discussed below, the refining industry has shifted sales to export markets amid a declining domestic market. According to stakeholders and the information we reviewed, two recently strengthened key environmental regulations—the coordinated CAFE and GHG vehicle emission standards, and the RFS—have likely affected the refining industry by reducing the consumption of petroleum fuels, and compliance with the RFS has recently increased costs and created other challenges for some refiners. The other three key environmental regulations we reviewed have had minimal effects to date because they have either not yet been implemented or have generally not affected the industry in a major way, according to several stakeholders and information we reviewed. According to information we reviewed and two stakeholders we contacted, CAFE and GHG vehicle emission standards have contributed to reductions in the consumption of petroleum fuels, but the extent is unclear. These standards aim to reduce oil consumption, and although they do not require changes at the refinery level, they can affect refineries indirectly by contributing to improvements in the overall efficiency of the vehicle fleet and, therefore, reducing fuel consumption. However, the National Academy of Sciences reported that it is difficult to isolate the effect of CAFE and GHG vehicle emissions standards from other factors that also affect consumption, such as higher fuel prices and changing driving habits.
Stakeholders had different views on the extent to which CAFE and GHG vehicle emission standards have affected consumption of petroleum products. We reported in 2007 that CAFE standards—along with higher fuel prices and other factors—contributed to a reduction in transportation fuel consumption of 2.8 million barrels per day in 2002. CAFE standards for cars largely did not change from 1990 through 2010, but they were strengthened beginning with model year 2011. According to EPA and DOE officials, since the standards did not change until recently, CAFE and GHG vehicle emissions standards did not cause the shift from growing consumption to declining consumption discussed previously. Regarding the strengthened standards, EPA estimated in 2010 that vehicles were expected to save 1.3 billion gallons of gasoline in 2013 compared with model year 2011 standards. This is equivalent to about 1 percent of EIA’s projection of gasoline consumption in 2013. A stakeholder told us that the CAFE and GHG vehicle emissions standards have likely had a relatively large impact on petroleum demand declines in the past few years, but it is unclear how much declining demand overall can be attributed to these standards versus other factors such as the recent economic recession and higher fuel prices. On the other hand, EPA and DOE officials, and a refinery representative told us that the most recent changes to CAFE and GHG vehicle emissions standards have had a marginal effect on petroleum demand so far. DOE officials also told us that the impact of the standards has been limited because they affect new car sales, and there are a relatively small number of new vehicles in the overall fleet. Several stakeholders we contacted and information we reviewed cited three main effects that the RFS has had on the domestic petroleum refining industry or individual refiners—increased compliance costs, declining domestic consumption, and investment uncertainty.
In addition, EPA has been late in issuing annual RFS standards, and several factors contribute to the delays.

RFS Has Had Three Main Effects

Stakeholders we contacted and information we reviewed identified three main ways the RFS has affected U.S. petroleum refiners: (1) compliance has recently increased costs for some refiners, (2) required blending of renewable fuels has contributed to declining domestic consumption of petroleum-based transportation fuels, and (3) EPA's delays in issuing annual RFS standards may have contributed to investment uncertainty for some refiners.

First, compliance with the RFS has recently increased costs for some refiners, according to information we reviewed and several stakeholders we contacted. Under the RFS regulations, refiners and other obligated parties are required to ensure U.S. transportation fuels include certain amounts of renewable fuels. To comply, refiners generally have two options—they can purchase and blend renewable fuels themselves, or they can pay others to blend or use renewable fuels by purchasing credits. These credits can be freely traded, and prices for credits are established by the market and generally reflect the stringency of requirements and the costs of incorporating additional renewable fuels into the transportation fuel system to comply with the RFS—if costs increase, credit prices would tend to increase as well. According to EIA, corn-based ethanol credit prices were low—between $0.01 and $0.05 per gallon from 2006 through much of 2012—because it was generally economical to blend up to or above the level required by the RFS. However, in 2013, prices for these credits increased to over $1.40 per gallon in July before declining to about $0.20 per gallon as of mid-November 2013.
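As a rough illustration of the compliance mechanics described above, the sketch below computes what a credit-purchasing refiner would pay at the cited price levels. The refiner's obligation and blending volumes are hypothetical; only the per-gallon credit prices come from the text.

```python
# Back-of-envelope sketch of RFS compliance via credit purchases.
# The obligation and blending volumes below are hypothetical; only the
# credit prices ($0.05 and $1.40 per gallon) are cited in the text.

def rfs_credit_cost(obligated_gallons, blended_gallons, rin_price):
    """Cost of covering a blending shortfall by purchasing credits."""
    shortfall = max(obligated_gallons - blended_gallons, 0)
    return shortfall * rin_price

# A hypothetical refiner obligated for 100 million gallons of renewable
# fuel that blends only 60 million gallons itself:
low = rfs_credit_cost(100e6, 60e6, 0.05)   # within the 2006-2012 price range
peak = rfs_credit_cost(100e6, 60e6, 1.40)  # at the July 2013 peak
print(f"${low:,.0f} at $0.05/gal vs ${peak:,.0f} at $1.40/gal")
```

At the same shortfall, the July 2013 peak price implies a compliance bill 28 times the earlier one, which is the dynamic stakeholders described.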
Several stakeholders told us this increase in credit prices was primarily due to RFS requirements exceeding the capability of the transportation fuel infrastructure to distribute, and the vehicle fleet to use, renewable fuels—a limit referred to as the “blend wall.” A refiner we spoke with also attributed the decline in credit prices in the second half of 2013 to EPA's statements expressing its desire to address the blend wall. We have previously reported on the blend wall and other challenges to the increasing use of renewable fuels. While the RFS applies to all refiners in the same way, the effect of the rise in credit prices may depend on each refiner's situation. However, in comments on this report, EPA stated that refiners experience the same compliance costs. As a result of higher costs, several stakeholders told us refiners could reduce production, produce more jet fuel, which is not subject to RFS requirements, or increase exports to nations where the RFS does not apply. (See app. III for more information about the blend wall, RFS credits, and views on how they have affected U.S. refiners.)

Second, the RFS has contributed to the declining domestic consumption of petroleum-based transportation fuels. Under the RFS regulation, refiners and other obligated parties are required to ensure U.S. transportation fuels include certain amounts of renewable fuels. As a result, refiners and other industry participants have blended increasing amounts of renewable fuels. For example, consumption of ethanol increased almost 8-fold, from 1.7 billion gallons in 2000 to 12.9 billion gallons in 2012. According to EIA, increased ethanol use since 2007 displaced over 4 billion gallons of petroleum-based gasoline, or about 3 percent of gasoline consumption in 2012. As discussed previously, decreases in consumption affect refiners by decreasing the size of the domestic market.
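The ethanol figures above imply a few magnitudes worth noting; the sketch below derives them. The derived values are inferences from the text, not separate EIA statistics.

```python
# Arithmetic implied by the ethanol consumption figures cited above.

ethanol_2000 = 1.7e9    # gallons
ethanol_2012 = 12.9e9   # gallons
fold_increase = ethanol_2012 / ethanol_2000   # "almost 8-fold"

displaced = 4.0e9       # gallons of gasoline displaced since 2007 (EIA)
share = 0.03            # "about 3 percent of gasoline consumption in 2012"
implied_gasoline_2012 = displaced / share     # inferred, not an EIA figure

print(f"Ethanol growth: {fold_increase:.1f}-fold")
print(f"Implied 2012 gasoline consumption: "
      f"{implied_gasoline_2012 / 1e9:.0f} billion gallons")
```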
Since the RFS was established in light of concerns about the nation’s dependence on imported crude oil, decreased consumption of petroleum products may further some of the objectives of the RFS. Third, the RFS has contributed to investment uncertainty for refiners according to several stakeholders because EPA has not issued annual RFS standards on time since 2009. Beginning in calendar year 2009 and through calendar year 2022, EPA is required to set annual blending percentages for total renewable fuels, advanced biofuels, cellulosic biofuels, and biomass-based diesel fuels by November 30 of the preceding calendar year. However, as shown in figure 9, EPA has missed the statutory deadline to set annual percentages since 2009. Most recently, EPA issued 2013 standards in August 2013—over 8 months late—and has not issued the 2014 standards. EPA proposed the 2014 standards on November 29, 2013, and EPA officials told us that they plan to finalize the standards in Spring 2014. The RFS compliance period—the time during which refiners and other parties incur obligations under RFS and can take steps to incorporate additional renewable fuels to create credits for compliance—is set by statute to be a full calendar year, and delays do not change this compliance period. As a result, when the RFS standards are issued late, the industry has less time to plan and budget effectively. Several representatives of refiners told us that delays in issuing annual RFS standards increase uncertainty for refiners and renewable fuel producers, making it more difficult to make long-term planning decisions. One refining company representative told us that the company has reduced capital investments due to uncertainty related to the RFS. In contrast, EPA officials told us that there is no indication that delays have caused significant problems for refiners. They also noted that delays could actually make annual standards more robust since EPA then has more data upon which to base decisions. 
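The delay cited above can be checked against the statutory deadline. In the sketch below, the exact issuance day is assumed for illustration, since the text gives only the month.

```python
from datetime import date

# The statute requires annual RFS standards by November 30 of the
# preceding calendar year. The 2013 standards were due November 30, 2012,
# and issued in August 2013; the exact day below is assumed.

due = date(2012, 11, 30)
issued = date(2013, 8, 6)                  # assumed issuance day
months_late = (issued - due).days / 30.44  # average days per month

print(f"Roughly {months_late:.0f} months late")
```

This is consistent with the report's characterization of the 2013 standards as over 8 months late.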
Regulatory Development Processes Contribute to EPA Delays in Issuing RFS Standards

EPA officials told us that delays in issuing RFS standards have largely been due to the length of the regulatory development process, which includes interagency and public reviews. Under the interagency review process, EPA is to follow certain procedures before publishing proposed or final regulations that establish annual RFS standards, including submitting draft proposed and final regulations to the Office of Management and Budget (OMB), which coordinates review of the draft regulations by other agencies as well as conducting its own review. The interagency review process is to ensure that regulations are consistent with the President's priorities, among other things, and that decisions made by one agency do not conflict with the policies or actions taken or planned by another. Under the public review process, EPA must publish a proposed standard in the Federal Register, provide the public with the opportunity to review and comment on the proposal, and address comments received before finalizing the regulation. According to EPA officials, the interagency and public review processes can be time consuming because the RFS standards involve complex and controversial issues and balance competing agricultural, energy, and environmental policy interests.

In 2009, we recommended that EPA and other agencies track their performance in developing significant regulations against targeted milestones to identify opportunities for improvement. We found that monitoring actual versus estimated performance could help agency managers identify steps in the process that account for substantial time and provide information necessary to evaluate whether time was well spent. In this regard, EPA stated in comments to our 2009 report that it uses an agency-wide Action Development Process that tracks 14 milestones as it develops proposed rules and additional milestones in developing final regulations.
For example, EPA tracks when its senior management approves a document describing the scope of the regulation and the analytical work necessary to develop it, known as the detailed analytic blueprint. In comments to our 2009 report, EPA stated that it used an internal tracking system along with additional information to develop regulatory management reports for EPA managers and executives. EPA stated at the time that this process helps management identify regulations that are off track so that corrective steps can be taken to expedite their completion. EPA officials told us that they develop RFS regulations using the same procedures used for developing all EPA regulations. However, even with these systems, EPA has not met its statutory deadlines for any of the five annual RFS standards since 2009. EPA has not conducted a systematic review of its experience issuing RFS regulations to identify the underlying causes of repeated delays and has not identified changes in its approach that may help to avoid these delays in the future. Without such analyses and a plan to address the underlying causes of the delays, EPA risks repeating them.

The other key regulations that we reviewed—Tier 3 standards, stationary source GHG requirements, and LCFS—have had minimal effects to date because these regulations either have not yet been implemented (Tier 3 standards) or, with respect to the other two, have not affected industry operations or costs in a major way, according to stakeholders and information we reviewed. Specifically:

Tier 3 standards. Tier 3 standards were proposed on May 21, 2013, and EPA announced final standards on March 3, 2014; therefore, they have not had a direct effect on industry to date.

Stationary source GHG requirements.
Representatives of two refiners told us that stationary source GHG requirements have been burdensome to refiners; however, several other stakeholders told us the requirements have not had a major effect, and EPA officials told us they were aware of only three refineries that have received major source GHG permits since the GHG permitting program was implemented in 2011. A refining company representative expressed concerns to us about the lengthy permitting process to authorize GHG emission increases. However, stationary source GHG requirements do not apply unless an existing refining facility proposes a major modification or a new refinery is proposed for construction. An EPA official said that, in most cases, the best available control technologies selected to comply with GHG requirements for refining facilities involve energy efficiency improvement measures that could help refiners reduce fuel consumption and save money. Further, EPA officials also explained that, in some cases, delays can occur when the refinery applicant has not provided EPA with the proper information to proceed with processing the permit. The Clean Air Act requires that EPA approve or deny such permits within 12 months of receiving a complete application.

LCFS. CARB, the entity responsible for implementing LCFS, said the regulation has had a modest effect to date—increasing fuel prices by about $0.01 per gallon. LCFS is the subject of several ongoing lawsuits, which resulted in a 4-month delay in some regulatory activities and uncertainty about the status of the regulation. According to a study conducted by a consultant on behalf of industry, the ongoing legal challenge to LCFS is creating uncertainty that discourages new investments by industry.
A refining industry trade association representative told us many refiners that previously invested in new components for their California facilities to process heavy crude oils may not be able to make an adequate return on investment, since the LCFS disincentivizes the use of carbon intensive heavy crude oils. However, a CARB official noted that LCFS does not specifically prohibit any crude oil from being processed in California refineries, but rather ensures that the LCFS' goal to reduce carbon intensity in transportation fuels is not undermined by increased use of higher carbon intensity crude oils. Nevertheless, California refiners have thus far been able to comply with LCFS requirements by blending lower carbon intensity renewable fuels—such as Brazilian sugar-cane ethanol—or purchasing carbon credits as an alternative method of compliance.

Stakeholders we contacted and information we reviewed generally suggest that the outlook of the U.S. refining industry depends on a number of factors, in particular: (1) future domestic consumption of petroleum products; (2) the extent to which key environmental regulations raise costs for domestic refiners; and (3) the extent to which domestic refiners will be able to export and compete in international markets.

The outlook of the U.S. refining industry depends on future domestic consumption of petroleum products, which is uncertain, according to stakeholders we contacted and information we reviewed. As discussed above, domestic petroleum product consumption declined by 11 percent from 2005 through 2012, and forecasts we reviewed project that consumption of three major petroleum products—gasoline, diesel, and jet fuel—will be stable to slightly increasing through 2020, but not returning to the high levels of the past.
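A minimal sketch of the percentage changes discussed in this section. Only the 11 percent decline from 2005 through 2012 comes from the text; the absolute 2005 level is a hypothetical illustration.

```python
# Helper for reasoning about percentage changes in consumption. Only the
# 11 percent decline from 2005 through 2012 comes from the text; the
# absolute 2005 level below is a hypothetical illustration.

def apply_change(baseline_bpd, pct):
    """Apply a fractional percentage change to a baseline level."""
    return baseline_bpd * (1.0 + pct)

hypothetical_2005 = 20.8e6   # barrels/day, illustrative only
illustrative_2012 = apply_change(hypothetical_2005, -0.11)
print(f"Illustrative 2012 consumption: {illustrative_2012 / 1e6:.1f} million bpd")
```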
Most of the scenarios in forecasts we reviewed from IHS and EIA project total consumption of gasoline, diesel, and jet fuel to increase slightly by 2020, with projections ranging from a decline of 2 percent to an increase of 7 percent compared with 2012 consumption. Expectations differ somewhat by fuel, with all EIA scenarios projecting gasoline consumption to decline or remain stable and diesel and jet fuel consumption to increase from 2012 to 2020 (see fig. 10). IHS projects an increase in the consumption of both gasoline and diesel, with more robust growth projected for diesel. Scenarios in the forecasts we reviewed generally project consumption to decline after 2020.

Forecasts indicate that the level of future domestic consumption—the size of the domestic market for petroleum products—may affect future U.S. refinery production. In higher consumption scenarios, EIA's projections suggest higher refinery production than in scenarios with lower domestic consumption. Specifically, EIA projects that inputs to refineries—which track trends in refinery production—in 2020 would be about 1 million barrels per day higher in scenarios with higher domestic consumption—a difference of about 7 percent of 2012 inputs. This difference is equivalent to about eight average-size U.S. refineries.

Several stakeholders we contacted and information we reviewed highlighted various factors that can affect future domestic consumption levels and thereby the size of the largest market for the production of U.S. refineries, including the following:

Economic growth. Faster economic growth tends to increase consumption, and EIA's forecast scenario with higher economic growth assumptions projects greater future consumption of petroleum products than a scenario with low economic growth.

Crude oil and petroleum product prices. Higher prices for crude oil and petroleum products tend to decrease consumption.
For example, of the forecast scenarios we reviewed, the scenario that assumes high future oil prices projects lower domestic consumption of petroleum products.

Shifts in consumer behavior and demographic trends. Changes in consumer behavior, such as reduced driving, along with demographic trends, such as an aging population and fewer young people with driver's licenses, may reduce future consumption, according to EIA.

Key regulations. Three of the key regulations we reviewed—CAFE and GHG vehicle emission standards, RFS, and LCFS—are expected to reduce domestic consumption of petroleum products in the future, according to information we reviewed, though it is uncertain by how much. CAFE and GHG vehicle emission standards will require more efficient vehicles in the future, which may reduce future consumption of fuels. EPA estimated that the model year 2012-2025 standards are projected to reduce U.S. consumption of crude oil by 2.2 million barrels per day by 2025, equivalent to almost 15 percent of crude oil used by refineries in 2012. Similarly, under the RFS statute, unless waived by EPA, renewable fuels blending is required to double by 2022, which EPA estimated would reduce gasoline and diesel demand by 13.6 billion gallons, equivalent to about 10 percent of consumption in 2012. Furthermore, CARB projected that the LCFS would help decrease future gasoline consumption in California. However, the extent to which these regulations will reduce future consumption depends on actions by regulators and on market and other developments. For example, as discussed above, EPA has proposed to reduce renewable fuel requirements for 2014 due to an inadequate supply in light of the blend wall and other issues. EPA stated that the framework it applied to determine the proposed percentage standards could be appropriate for later years.
Therefore, the potential for RFS to reduce petroleum-based fuel consumption will depend on the percentages finalized by EPA, which, in turn, will depend on the development of advanced renewable fuel sources and market infrastructure and could be affected by legal challenges, as well as any legislative action to modify the RFS.

Stakeholders we contacted and information we reviewed generally suggest that the outlook of the U.S. refining industry will also depend on the extent to which some key regulations—RFS, Tier 3, stationary source GHG requirements, and LCFS—increase costs for refiners. In general, increasing costs for refiners may be absorbed by refiners themselves (i.e., by reducing their profits), be passed on to consumers through higher product prices, or both. The requirements on domestic refiners from the key regulations we reviewed generally are expected to collectively have a greater effect in the future, for example, by affecting more refiners (such as the stationary source GHG requirements, to the extent that more refineries make modifications over time) or becoming more stringent (such as the RFS), potentially increasing costs for refiners. In addition, several stakeholders told us that the uncertainty surrounding these regulations—and what costs they will impose—can affect the market climate within which refiners and other market participants make investment decisions, such as whether to expand a refinery's ability to process different crude oils or to build new advanced biofuel processing facilities. Such uncertainty can discourage investments in the industry overall.

RFS. RFS may increase costs for some refiners depending on the percentages of renewable fuels required by EPA and on other factors. As discussed previously, costs for some refiners to comply with RFS rose in 2013, which some of the stakeholders we contacted attributed to concerns about the blend wall.
The blend wall may remain a concern into the future because statutory renewable fuel blending requirements continue to increase—they more than double from 2012 to 2022—while the consumption of petroleum products is expected to increase only slightly. Several stakeholders told us that the effect of the RFS depends in particular on how EPA addresses the blend wall in the annual standards it issues in the future. Furthermore, EPA's timeliness in issuing the standards could also affect costs to the extent that delays affect the supply of renewable fuels, RIN prices, and refiners' ability to plan and budget effectively for compliance. Several representatives of refiners told us that future delays would contribute to investment uncertainty and higher costs for refiners. EPA officials said that they did not believe delays have affected market participants, and that the market for RFS credits has provided flexibility to refiners and other obligated parties. The extent to which the RFS increases costs in the future could also be affected by the outcome of any relevant litigation and of legislative proposals to change the RFS or how EPA implements it.

Tier 3. According to EPA, to meet the Tier 3 fuel sulfur standards, refiners would need to install or upgrade hydrotreating capacity or take other steps to reduce the sulfur content in fuels, which will likely increase industry-wide costs. EPA projected that 67 out of 108 refineries would modify or purchase some equipment, and that the capital costs of installing this equipment and the operating costs to run it would average about $0.0065 per gallon and total $804 million in 2017. An industry study of the Tier 3 proposal estimated that the regulation would increase costs by up to $0.09 per gallon for the highest-cost refinery, and several refinery representatives told us that Tier 3 would increase their costs.
The extent to which the refining industry will be affected by Tier 3 standards would have been greater had EPA decreased the per-gallon maximum allowable sulfur levels in gasoline—known as caps. In the final standards, EPA maintained the current 80 ppm cap but had sought comment on whether it should decrease the cap to as low as 20 ppm. A stakeholder told us that Tier 3 would be manageable if EPA maintained the current caps but far more difficult if the caps were lowered. One study we reviewed estimated that industry could incur additional capital expenses to achieve lower sulfur caps, ranging from $2 billion to over $6 billion depending on the sulfur cap level in the final standards.

Stationary source GHG requirements. Several of the refining industry representatives we contacted expressed concerns that stationary source GHG requirements could become more stringent in the future. The current permitting framework is a case-by-case determination that takes into account costs, among other factors, and places no requirement on existing refineries unless they undertake a major modification. However, EPA entered a settlement in which the agency agreed to develop national performance standards—called New Source Performance Standards (NSPS)—for GHG emissions from new and modified refineries, and GHG emissions guidelines for certain facilities at existing refineries. Although EPA has no current schedule to issue these standards, EPA committed in its settlement to issue them, and several stakeholders expressed concern that future standards could impose stricter controls involving higher costs at refineries. A stakeholder told us that until EPA clarifies its approach for NSPS, many refiners will be reluctant to make certain investments in their refineries out of concern that their investments may be unprofitable given future requirements. In addition, some companies may preemptively factor the cost of emissions control technologies into their investment analyses.
LCFS. Two stakeholders and a refining industry trade association told us that California refiners could face higher costs or compliance challenges unless CARB adjusts future low carbon fuel requirements. CARB has estimated that the cost of LCFS on gasoline and gasoline-substitute fuels is likely to range between an increase of $0.09 per gallon and a decrease of $0.13 per gallon by 2020. However, an industry study estimated that the LCFS could cost the refining industry an average of $0.70 per gallon by 2020. The study also projected that 5 to 7 of 14 California refineries could cease production by 2020, and that the LCFS could raise other compliance challenges because of insufficient supplies or consumer uptake of cellulosic, Brazilian sugar-cane ethanol, and other low carbon intensity fuels or vehicle technologies. CARB officials told us that if it proves more difficult than expected to meet LCFS requirements, CARB could introduce cost containment provisions to increase the availability of credits, such as a “safety valve” to release additional credits at a set price, or extra credits for certain compliance approaches. A stakeholder told us that the decisions CARB makes with respect to the LCFS may affect California refiners' ability to stay in business and compete with refiners in other states and countries.

While the domestic refining industry has increasingly relied on export markets, stakeholders and forecasts we reviewed indicate that the industry's future competitiveness is uncertain and that foreign markets present both challenges and opportunities for U.S. refiners. Forecasts and data we reviewed from EIA and IEA suggest that future domestic refinery production levels may depend on exports of petroleum products. Petroleum products are increasingly global commodities, and EIA data indicate that as domestic consumption has declined, refiners have looked to foreign markets to sell products.
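EPA's Tier 3 cost figures cited earlier in this section (about $0.0065 per gallon on average, totaling $804 million in 2017) jointly imply a fuel volume; the sketch below derives it as a rough consistency check. The derived volume is an inference, not an EPA statistic.

```python
# Rough consistency check of the Tier 3 cost figures cited earlier:
# an average cost per gallon and a total annual cost imply a fuel volume.
# The derived values are inferences, not EPA statistics.

total_cost_2017 = 804e6       # dollars
avg_cost_per_gal = 0.0065     # dollars per gallon

implied_gallons = total_cost_2017 / avg_cost_per_gal
implied_bpd = implied_gallons / 365 / 42   # 42 gallons per barrel

print(f"Implied volume: {implied_gallons / 1e9:.0f} billion gallons/year "
      f"(~{implied_bpd / 1e6:.1f} million barrels/day)")
```

The implied volume is in the neighborhood of U.S. gasoline consumption, which suggests the two EPA figures are mutually consistent.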
Since 1949, the United States had been a net importer of petroleum products, but this long-term trend reversed in 2011 when the United States became a net exporter of total petroleum products. According to EIA data, the United States recently exported more petroleum products than other leading exporters, including Russia, India, and Singapore, and petroleum product exports have represented an increasing share of U.S. refinery production. Exports of petroleum products represented 7 percent of refinery production in 2007 but increased to 17 percent in 2012. Major markets for U.S. exports include Central and South America, Mexico, and Europe, to which U.S. refiners sent nearly all diesel exports in 2012. The United States exports more diesel than gasoline, though U.S. refiners have been increasing exports of gasoline to Central and South America and Africa. Forecasts that we reviewed generally project that exports will remain strong. According to most of the EIA and IHS forecasts we reviewed, exports of petroleum products are expected to increase until 2015, but the extent of the increase is unclear. As shown in figure 11, EIA scenarios project export levels from 2.6 to 3.4 million barrels per day by 2020, a relatively wide range. Even the lowest projection for petroleum product exports in 2020 is above 2010 levels, indicating a general expectation that exports will remain strong. The extent to which domestic refiners are able to export their products will depend on the competitiveness of domestic refiners compared with foreign refiners, and stakeholders we contacted and information we reviewed highlighted both challenges that may inhibit competitiveness and opportunities that may increase it in the future. To sell products abroad, refiners need to be competitive—that is, they must be able to supply fuels that foreign purchasers want to buy at prices that are attractive. Stakeholders and information we reviewed suggest that various factors may affect the U.S. 
refining industry's future competitiveness, including: (1) the balance between global refining capacity and global demand for petroleum products, (2) costs associated with environmental regulations, (3) exports to nations with stringent fuel standards, and (4) increasing domestic and Canadian crude oil production. More specifically:

Balance between global refining capacity and demand. IEA data indicate that competition from foreign refiners may increase as global refining capacity is projected to exceed global consumption, creating an imbalance between global supply and demand that may affect U.S. refiners. According to IEA, global consumption of petroleum products was about 78.9 million barrels per day in 2012 and is projected to grow an additional 6 million barrels per day by 2020—with growth concentrated in Asia and the Middle East and consumption declining in Europe. But IEA projects refining capacity may grow even faster, resulting in excess capacity (refining capacity beyond that needed to meet consumption) of nearly 15.5 million barrels per day by 2020, in contrast to an estimated 4.8 million barrels per day of excess capacity in 2012. Excess refining capacity is likely to result in greater competition in foreign markets overall, and some regions may present particular challenges. A large share of the new capacity is expected in China, India, and the Middle East, and representatives of two refiners indicated concern that capacity additions in some of those regions may present competition for U.S. refiners. Several other stakeholders were optimistic about domestic refiners' ability to compete in the future. According to IEA, capacity additions in China and India are intended to keep pace with growing consumption in those regions, but new Middle Eastern refineries are intended to be export facilities and may present increased competition to U.S. refiners.

Costs associated with environmental regulations.
As discussed above, the key environmental regulations we reviewed could collectively impose additional costs on the domestic refining industry. These costs could affect the industry's ability to compete internationally to the extent that foreign refiners do not face similar costs. In addition, regulatory uncertainty can affect refiners' competitiveness if it inhibits the industry from making investments that would otherwise lower costs. Not all of the key regulations we reviewed would be expected to affect the industry's competitiveness. In particular, CAFE and GHG vehicle standards do not impose requirements on refiners. In addition, although RFS and Tier 3 standards could impose requirements with potential associated costs on some refiners, they would not apply to exported fuels—they apply only to fuels sold in the United States, regardless of where they are produced.

Potential for increased exports to nations with stringent fuel standards. In general, U.S. refineries are among the most sophisticated in the world and have generally been optimized to produce large proportions of cleaner-burning gasoline. IEA has pointed out that refiners in many parts of the world face challenges producing fuels that meet high product quality and environmental performance standards. Therefore, some U.S. refiners may benefit from any trend toward higher quality and more stringent environmental performance standards. In this regard, actions refiners may take to reduce gasoline sulfur to comply with Tier 3 standards could enable them to export to markets—such as Japan and much of Europe—that already require low sulfur gasoline. On the other hand, representatives of a refiner pointed out that refiners could undertake such investments on their own—without Tier 3—if such exports were sufficiently economically attractive.

Increasing domestic and Canadian crude oil production. As discussed above, increasing U.S.
and Canadian crude oil production has led to lower cost crude oil for some refiners, providing a competitive advantage. All of the forecast scenarios we reviewed from EIA, IEA, and IHS anticipate increases in U.S. crude oil production, but the projections are uncertain and vary widely—from 6.8 to 9.8 million barrels per day in 2020, as shown in figure 12. Projections are revised each year, and expectations for U.S. crude oil production in 2020 have increased in more recent forecasts. For example, the reference scenario in EIA's most recent forecast projects domestic crude oil production to approach a historical high of 9.6 million barrels per day in 2020, higher than the reference scenario from the prior year. Canadian crude oil production—which accounted for about 16 percent of the crude oil used by U.S. refineries in 2012—is expected to increase as well: the reference scenario in EIA's international forecast projects that Canadian petroleum liquids production will increase more than 30 percent from 2012 levels, reaching approximately 5 million barrels per day in 2020. The extent of the increase in future crude oil production can have implications for future petroleum product exports. For example, EIA's scenario that assumes more domestic crude oil and natural gas resources projects higher export levels than a scenario that assumes low crude oil and natural gas resources. Several stakeholders told us that various issues could limit U.S. refiners' ability to take advantage of growing crude oil supplies. In particular, it is unclear whether planned expansions in pipelines and rail transportation will keep pace with growing production, and these infrastructure expansions could be affected by regulatory actions to address pipeline and rail safety. Similarly, several stakeholders told us that potential future increases in crude oil exports, which are currently minimal, could put pressure on regional crude oil prices, reducing the price advantage of U.S. refiners.
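The production figures above imply a few magnitudes worth noting; in the sketch below, the derived values are inferences from the text, not separate forecast statistics.

```python
# Magnitudes implied by the crude oil production figures cited above.

us_low, us_high = 6.8e6, 9.8e6         # projected U.S. output range, bpd, 2020
spread = (us_high - us_low) / us_low   # spread across forecast scenarios

canada_2020 = 5.0e6                    # projected Canadian liquids, bpd
min_growth = 0.30                      # "more than 30 percent" from 2012
implied_canada_2012 = canada_2020 / (1 + min_growth)   # upper bound on 2012 level

print(f"Scenario spread: {spread:.0%} of the low projection")
print(f"Implied 2012 Canadian production: "
      f"at most {implied_canada_2012 / 1e6:.1f} million bpd")
```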
The domestic petroleum refining industry has been and is expected to continue to be affected by several profound changes. Some of these changes, such as the growth in crude oil production in the United States and Canada, are reshaping the industry and creating new business opportunities. To take advantage of some of these opportunities, refiners and other market participants will need to invest—to upgrade refineries to process different crude oils or to build pipelines or rail connections to move more crude oil from production to refining centers. Uncertainty can affect the market climate within which these investment decisions will be made. In this context, EPA’s timeliness in issuing annual percentage standards under the RFS is important to help inform the investment decisions of the refining industry. In issuing annual percentage standards, EPA may waive the statutory volumes in whole or in part according to statutory criteria, which EPA has identified as potentially factoring in the blend wall, market developments, and other issues. However, EPA has missed the statutory deadline for issuing annual standards under the RFS in most years. EPA has some systems in place to monitor and evaluate progress in developing regulations, which could provide useful information for understanding these delays. But EPA has not identified the underlying causes of the delays, and it has not developed a plan to address them; it therefore risks repeating them. Delays in issuing RFS standards are consequential because refiners’ compliance periods are not adjusted accordingly, creating uncertainty in the marketplace and potentially harming investment. Uncertainty among refiners, renewable fuel producers, and other market participants about how EPA will address the blend wall, which can be exacerbated by the prospect of litigation, can affect investment decisions and ultimately the availability and prices of the fuels they produce.
To improve EPA’s ability to meet the annual statutory deadline for issuing annual RFS standards, we recommend that the Administrator of the EPA take the following two actions: (1) assess past experience to identify the underlying causes of delays in issuing annual RFS standards; and (2) develop and implement a plan to address the causes of delays and help ensure RFS annual standards are issued on time. We provided drafts of this report to DOE, DOT, and EPA for review and comment. The three agencies provided technical comments on early or final drafts, which we incorporated as appropriate. EPA also provided a letter in which it generally agreed with our findings and recommendations and clarified three topics discussed in the report. First, regarding the effects of compliance with RFS, EPA asserted that refiners experience the same compliance costs regardless of whether they are fully integrated, with blending capabilities, or merchant refiners that purchase credits for compliance. Based on our work, we found that the views of several stakeholders differed from EPA's. For example, in a 2011 study, DOE identified the degree to which a small refiner can blend its production with renewable fuels as a large factor in whether compliance with the RFS causes economic hardship. In theory, market-based compliance systems—such as the RFS credit system—provide incentives for market participants to make decisions that would tend to equalize additional compliance costs over time. However, physical infrastructure or contractual constraints, among other factors, could result in different outcomes in the short run. We added language explaining EPA's views in the report and in appendix III. Second, regarding the time frame for RFS compliance, EPA stated that the RFS compliance deadline—the date by which refiners and other obligated parties must demonstrate compliance to EPA—is established through implementing regulations, not statute.
EPA stated that it adjusted the 2013 deadline to provide additional time to demonstrate compliance. We acknowledge that EPA can extend the compliance deadline. However, the compliance period refers to the time during which refiners and other parties incur obligations under RFS and can take steps to incorporate additional renewable fuels to generate credits for compliance. This period is set by statute to be a full calendar year. We clarified language in the report to acknowledge EPA's ability to adjust the compliance deadline, essentially providing additional time for obligated parties to purchase credits, and its inability to adjust the compliance period. Third, regarding Tier 3 standards, EPA announced the final standards while our draft was with the agency for comment. EPA stated that the final Tier 3 program is very similar to what it proposed, though EPA made some changes based on public input and updated its analyses. EPA provided technical comments with information from the final rule, which we incorporated into the report as appropriate. However, we were not able to obtain stakeholder and other views on the final Tier 3 rule for this report. See appendix IV for EPA’s letter. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies to the appropriate congressional committees and to the Secretaries of Energy and Transportation and the Administrator of the EPA. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
This report provides information on the domestic petroleum refining industry and its market and regulatory environment. Specifically, it addresses what is known about (1) major changes—including key environmental regulations—that have recently affected the domestic petroleum refining industry and (2) major factors that may affect the future of the domestic petroleum refining industry—including its production, profitability, and competitiveness in foreign markets. To provide information on major changes that have affected the domestic petroleum refining industry and the future of the industry, we reviewed information including the following: studies by federal agencies and consultants, company financial regulatory filings, and proposed and final regulations and regulatory impact analyses. To identify studies and other literature, we conducted searches of various databases, such as ProQuest and PolicyFile, for studies published since 2009. We also asked agency officials and other stakeholders we contacted to recommend studies. Based on our research and information from stakeholders, we identified five key regulations that were recently strengthened or proposed: (1) the Environmental Protection Agency’s (EPA) Renewable Fuel Standard regulations; (2) the Department of Transportation’s Corporate Average Fuel Economy and EPA’s greenhouse gas vehicle emission standards; (3) EPA’s Tier 3 Motor Vehicle Emission and Fuel Standards; (4) EPA’s stationary source greenhouse gas requirements; and (5) the state of California’s Low Carbon Fuel Standard. We reviewed agency regulatory impact assessments and industry and other studies on the effect of these regulations on industry. Other regulations may also affect the industry. We also summarized the results of semistructured interviews with a nonprobability sample of 32 stakeholders. (See app. II for a list of these stakeholders.)
Stakeholders included representatives from refining companies, environmental organizations, consultants, and officials from federal and state agencies. We also visited several refineries of selected refining companies. We selected these stakeholders to represent broad and differing perspectives on these issues based on recommendations from agencies and industry associations, along with other information. For example, to select refiners, we considered, among other factors, the size and location of their refineries, and whether they were vertically integrated or merchant refiners. When possible, we used a standard set of questions in interviewing stakeholders, including questions about the effect of the key regulations we reviewed. However, as needed, we also sought perspectives on additional questions tailored to these stakeholders’ expertise and sought opinions from stakeholders on key issues, such as their views on the potential effects of exports on industry. Because we used a nonprobability sample, the views of these stakeholders are not generalizable to all potential stakeholders, but they provide illustrative examples of the range of views. Similarly, the conditions at the refineries we visited are not generalizable to all refineries. The stakeholder views we summarize were not necessarily supported by all types of stakeholders, though we identify differing views where appropriate. Stakeholders and information we reviewed identified a number of changes that have affected the industry and a number of factors that may affect its future, and we report on those that were most often cited. To illustrate major changes over time and to describe the domestic petroleum refining industry, we summarized historical data from the Energy Information Administration (EIA) regarding such issues as capacity and location of refineries, crude oil production, and consumption of petroleum products. 
To assess the reliability of EIA data, we took several steps including reviewing documentation, interviewing EIA staff, and consulting with stakeholders. We determined the EIA data to be sufficiently reliable for the purposes of this report. To provide information about the future of the domestic petroleum refining industry and major factors that could affect it, we also reviewed forecasts from EIA, the International Energy Agency (IEA), and IHS, and summarized projections through 2020 under different scenarios. We selected these forecasts because they made projections through 2020, contained information broadly relevant to our report, covered multiple scenarios or offered a counterpoint scenario, and contained well-documented discussions of methodologies used and assumptions made. While forecasts are subject to inherent uncertainties, we found these forecasts to be reasonable for describing a range of views about potential conditions of the domestic refining industry and major factors that will help determine these conditions. We reviewed and compiled data from relevant scenarios and compared them where appropriate. Specifically, we reviewed all 27 scenarios in EIA’s 2013 forecast, the reference scenario in EIA’s 2014 initial forecast, and IHS’s forecast, and, in particular, highlight the scenarios representing the highest and lowest projection of gasoline, diesel, and jet fuel consumption; petroleum product exports; and crude oil production. We identified some differences in the metrics reported in the four forecasts and did not make direct comparisons in these instances. We conducted this performance audit from November 2012 to March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To demonstrate compliance with the Environmental Protection Agency’s (EPA) annual blending requirements under the Renewable Fuel Standard (RFS), refiners use renewable identification numbers (RIN), which we refer to in this report as credits. A RIN is a unique 38-character code that renewable fuel producers and importers assign to each gallon of renewable fuel produced or imported. To demonstrate compliance with the RFS, refiners and importers must provide sufficient RINs for the volume of gasoline and diesel they produce for use in the contiguous United States and Hawaii. For example, to comply with the 2013 total renewable fuels standard requiring that renewable fuels compose at least 9.74 percent of gasoline and diesel, a refiner selling 100 million gallons of gasoline would have to provide 9.74 million total RINs to EPA. Refiners can obtain RINs by purchasing and blending renewable fuels themselves, or they can purchase RINs from renewable fuel producers, importers, blenders, other refiners, or other RIN-holding entities. RINs are valid for the calendar year in which they were generated, and up to 20 percent of a year’s standard can be met with RINs from the previous year. Refiners and other obligated parties with more RINs than needed to meet the year’s blending standard can hold them for use in the following year or sell them to another party that needs additional RINs to comply with the blending standard. Prices for RINs reflect the cost of renewable fuels compared with the petroleum fuels they displace, the stringency of annual blending percentage standards, and other factors, and have varied over time. 
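The RIN obligation described above is a simple percentage calculation. The sketch below is illustrative only: the function name and example volume are hypothetical, and only the 9.74 percent 2013 standard comes from the text.

```python
# Illustrative sketch of the RFS credit (RIN) obligation arithmetic.
# Only the 9.74 percent standard is taken from the report's example;
# the function name and fuel volume are hypothetical.

def rin_obligation(fuel_gallons: float, pct_standard: float) -> float:
    """RINs an obligated party must retire for a given fuel volume."""
    return fuel_gallons * pct_standard

# A refiner selling 100 million gallons of gasoline under the 2013
# total renewable fuel standard of 9.74 percent:
print(f"{rin_obligation(100_000_000, 0.0974):,.0f} RINs")  # 9,740,000 RINs
```

A refiner short of RINs at this volume would make up the difference by blending more renewable fuel itself or by purchasing RINs from other parties, as the report describes.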
According to the Energy Information Administration (EIA), between 2006 and much of 2012, corn-based ethanol RIN prices were low—between $0.01 and $0.05 per gallon—because it was generally economical to blend up to or above the level required by the RFS. However, RIN prices for corn-based ethanol increased to over $1.40 per gallon in July 2013. Several stakeholders told us this increase in RIN prices was primarily due to RFS requirements exceeding the capability of the transportation fuel infrastructure to distribute and the fleet of vehicles to use renewable fuels, referred to as the “blend wall.” EPA officials told us that high corn prices, which made ethanol more expensive relative to gasoline, also contributed to higher RIN prices during this period. These RIN prices have since come down to about $0.20 per gallon as of mid-November 2013. A refiner attributed this decline to EPA’s statements expressing its desire to address the blend wall. The blend wall exists because blending more than 10 percent ethanol with gasoline (called E10) is affected by constraints such as the limited availability of vehicles that can use higher ethanol blends. In addition, higher ethanol blends are less widely available than E10 and must be priced at a discount to encourage greater consumption, according to EIA. EPA officials recently said the blend wall would be reached in 2014, when about 13.2 billion gallons of ethanol could be consumed as E10. Blending additional renewable fuels can be difficult and costly once the blend wall is reached because significant volumes of non-ethanol renewable fuels must be available, consumers must be encouraged to purchase additional higher blends of ethanol, and other market participants must develop the infrastructure to deliver those fuels. Compliance with the RFS has recently increased costs for some refiners, according to information we reviewed and several stakeholders we contacted.
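The blend-wall arithmetic above reduces to a volume check: if essentially all gasoline is sold as E10, the market can absorb at most about 10 percent of the gasoline pool as ethanol. The sketch below is a hedged illustration; the 132-billion-gallon consumption figure and the required volume are assumptions chosen for the example, not data from the report.

```python
# Hedged sketch of the E10 "blend wall": with nearly all gasoline sold
# as a 10 percent ethanol blend, ethanol consumption is capped at about
# 10 percent of gasoline volume. The gasoline and requirement volumes
# below are illustrative assumptions, not figures from the report.

def e10_ethanol_capacity(gasoline_gallons: float, blend_fraction: float = 0.10) -> float:
    """Maximum ethanol volume the gasoline pool can absorb as E10."""
    return gasoline_gallons * blend_fraction

wall = e10_ethanol_capacity(132e9)   # about 13.2 billion gallons of ethanol
required = 14e9                      # hypothetical blending requirement
shortfall = max(0.0, required - wall)
print(f"blend wall: {wall / 1e9:.1f}B gal; excess requirement: {shortfall / 1e9:.1f}B gal")
```

Any requirement above the wall must be met with higher ethanol blends, non-ethanol renewable fuels, or banked RINs, which is why stakeholders tied the 2013 RIN price spike to the wall.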
While the RFS applies to all refiners in the same way, the effects of rising or falling RIN prices may vary depending on each refiner’s situation. For example, refiners that have incorporated renewable fuel blending into their operations may have benefited from rising prices relative to refiners that are less well positioned. According to several stakeholders, RFS compliance has been most difficult for refiners with less of a retail presence, known as merchant refiners, because they do not blend their own fuel and must purchase RINs from others, increasing their cost of compliance. On the other hand, some industry participants may be relatively advantaged when the price of RINs rises. For example, an ExxonMobil official said that RIN costs did not have a significant impact on the company’s financial performance during the second quarter of 2013 because ExxonMobil meets the majority of its obligation by blending renewable fuels itself. EPA officials told us, however, that the RFS program affects all refiners equally because obligations are the same regardless of whether refiners blend renewable fuels themselves or purchase RINs. In particular, EPA stated that refiners experience the same costs: if a company generates its own RINs, there is a cost associated with doing so, namely the cost of the renewable fuel compared with the petroleum fuel it displaces.

In addition to the individual named above, Christine Kehr (Assistant Director), Elizabeth Beardsley, Catherine Bombico, Keya Chateauneuf, Nirmal Chaudhary, Quindi Franco, Cindy Gilbert, Katharine Kairys, Michael Kendix, Armetha Liles, and Alison O’Neill made key contributions to this report.

The U.S. petroleum refining industry—the largest refining industry in the world—experienced a period of high product prices and industry profits from the early 2000s through about 2007. Since the recession of 2007 to 2009, the industry has been in transition.
Federal and state agencies regulate petroleum refining and the use of petroleum products to protect human health and the environment, as well as for other purposes. EPA, DOT, and California recently proposed or strengthened five key regulations, including EPA and DOT's coordinated fuel economy and GHG vehicle emission standards, and EPA's RFS, which has required that refiners and others ensure transportation fuels include increasing amounts of renewable fuels such as ethanol produced from corn. GAO was asked to provide information on the domestic petroleum refining industry. This report examines: (1) major changes that have recently affected the industry and (2) the future of the industry. GAO reviewed information including studies by agencies and consultants and company financial filings; interviewed stakeholders, including agency officials and representatives of refiners and environmental organizations; and reviewed forecasts by the Energy Information Administration and others.

Stakeholders GAO contacted and information reviewed by GAO identified the following three major changes that have recently affected the domestic petroleum refining industry:

Increased production. U.S. and Canadian crude oil production have increased, leading to lower costs of crude oil for some refiners. After generally declining for decades, monthly U.S. crude oil production increased over 55 percent compared with average production in 2008.

Declining consumption. Domestic consumption of petroleum products declined by 11 percent from 2005 through 2012, resulting in a smaller domestic market for refiners.

Key regulations. Two key regulations—the Environmental Protection Agency's (EPA) and Department of Transportation's (DOT) coordinated fuel economy and greenhouse gas (GHG) vehicle emission standards, as well as EPA's Renewable Fuel Standard (RFS)—have contributed to declining petroleum-based fuel consumption.
For some refiners, compliance with the RFS increased costs in the first half of 2013, though costs have since declined to some degree from their peak. According to some stakeholders GAO contacted, this was primarily due to RFS requirements exceeding the capability of the transportation fuel infrastructure to distribute and the fleet of vehicles to use renewable fuels. Moreover, EPA has missed the statutory deadline to issue regulations establishing annual RFS blending standards since 2009. EPA has not systematically identified the underlying causes of these delays or changed its approach in order to avoid them. A late RFS contributes to industry uncertainty, which can increase costs because industry cannot plan and budget effectively, according to some stakeholders.

Stakeholders GAO contacted and information reviewed generally suggested that the U.S. refining industry's outlook depends on the following factors:

Domestic consumption. Future consumption of petroleum products is uncertain, with projections ranging from stable to slightly increasing through 2020 but not returning to consumption levels of the past. Forecasts GAO reviewed suggest higher future refinery production in scenarios with higher domestic consumption.

Costs of key regulations. The extent to which requirements in the key regulations increase costs for refiners will affect the industry's outlook. For example, future costs to comply with RFS may depend on the annual renewable fuel volumes EPA sets and whether EPA issues annual RFS standards on time. In general, increasing costs may be absorbed by refiners (i.e., by reducing their profits), be passed on to consumers through higher prices, or both.

Foreign markets. The U.S. refining industry has increasingly relied on foreign markets. Exports grew from 7 percent of production in 2007 to 17 percent in 2012. The extent to which domestic refiners export their products will depend on the competitiveness of U.S. refiners.
Factors that may affect competitiveness include domestic environmental regulations, levels of U.S. and Canadian crude oil production, and the balance between global refining capacity and demand for petroleum products. GAO recommends that EPA identify the underlying causes of delays in issuing RFS standards and implement a plan to issue RFS standards on time. EPA generally agreed with GAO's findings and recommendations.
The DDG 1000 program has from the onset faced a steep challenge framed by demanding mission requirements, stealth characteristics, and a desire to reduce manning levels to less than half those of predecessor destroyers. These requirements translated into significant technical and design challenges. Rather than introducing three or four new technologies (as was the case on previous surface combatants), DDG 1000 plans to use a revolutionary hull form and employ 11 cutting-edge technologies, including an array of weapons; highly capable sensors integrated into the sides of a deckhouse made primarily of composite material, not steel; and a power system designed for advanced propulsion as well as high-powered combat systems and ship service loads. This level of sophistication has necessitated a large software development effort—14 million to 16 million lines of code. All of this is to be accomplished while splitting construction between two shipyards. The Navy believes this approach and schedule are important to managing shipyard workloads, as starting later would have caused shipyard workload to drop too low. In a sense, then, the construction approach and schedule became an additional challenge, as they became constraints on the pace of technology and design development. To meet these multiple and somewhat conflicting demands, the Navy structured its acquisition strategy to develop key systems and mature the design before starting to build the ship. While the Navy has made good decisions along the way to address risk, it is already likely, shortly before the Navy embarks on ship construction, that additional funding will be necessary or trade-offs will need to be made to develop and deliver DDG 1000 ships. Despite multiple and somewhat competing demands, the Navy conceived a thoughtful approach and achieved developmental successes on DDG 1000.
Developing 10 prototypes of the ship’s critical systems helped to create confidence that a number of technologies would operate as intended, and the Navy’s plan to mature the ship’s design before starting construction aims to reduce the risk of costly design changes after steel has been cut and bulkheads built. For example, the Navy successfully demonstrated the advanced gun system through initial guided flight and testing on land. In other cases, such as for the integrated power system, tests brought to light technical problems, which the Navy was able to address by going to an alternate technology. However, notwithstanding these efforts, significant challenges remain in developing the ship’s design and a number of key components—in particular, the deckhouse, volume search radar, and the integrated power system. Moreover, the ship’s capability is contingent on an unprecedented software development effort. Recently, the Navy restructured the schedule to buy more time for development—a good decision. However, as construction of the first ship has not yet begun, the Navy may have exhausted its options for solving future problems without adding money and time. Although the initial phases of the design are complete, the shipbuilders will be pressed to complete a large amount of design work by October 2008 when lead ship construction begins. From August 2007 through May 2008, the shipbuilders finished work on 16 of the 100 design zones (individual units that make up the ship’s design) leaving 5 months to finish the final design phases in 84 zones leading up to the start of construction. While the shipbuilders believe they can finish the design by the start of ship construction, delays in the development of the ship’s key systems could impede completion of the design and eventually interfere with DDG 1000 construction. 
If the shipbuilders cannot finish planned design work prior to the start of lead-ship construction, the program is at greater risk for costly rework and out-of-sequence work during construction. To maintain the start of ship construction in 2008 while continuing to develop the ship’s technologies, the Navy recently realigned the program’s schedule. Rather than delivering a fully mission-capable ship, the Navy will take ownership of just the vessel and its mechanical and electrical systems—including the ship’s power system—in April 2013. At that point, the Navy plans to have completed “light-off” of the power, mechanical, and electrical systems. Light-off refers to activating and testing these systems aboard ship. The Navy deferred light-off of the combat systems—which include the radars, guns, and the missile launch systems—by over 2 years until May 2013. According to the Navy, conducting light-off in phases allows the program to test and verify the ship’s major systems, in particular the integrated power system, in isolation and creates additional time to mature the combat systems, as well as the software that supports these systems, before ship installation and shipboard testing. However, since the Navy will only test and inspect the hull prior to taking ownership of the vessel, it will not have a full understanding of how the ship operates as a complete and integrated system until after final shipboard testing of the combat systems in 2014. While the restructure maintains the construction schedule, it does delay verifying the performance of the integrated power system before producing and installing it on the ship. Tests of a complete integrated power system with the control system will not occur until 2011—nearly 3 years later than planned. To meet the shipyard’s schedule, the Navy will buy a power system intended for the third ship and use it in land-based tests. 
As a result, the integrated power system will not be demonstrated until a year after the power systems have been produced and installed on the two lead ships—an approach that increases exposure to cost and schedule risk in production. Finalizing deckhouse manufacturing and assembly processes is essential to constructing and delivering the deckhouse as planned. Changes to the manufacturing processes for deckhouse production are ongoing. The shipbuilder is validating process changes through production and inspection of a series of test units, culminating with a large-scale prototype manufactured to the same thickness and other specifications as the deckhouse. Final validation of the manufacturing processes for deckhouse construction will not occur until after construction, inspection, and shock testing of the large-scale prototype. However, test and inspection activities are not scheduled for completion until after the deckhouse production readiness review in September 2008. Problems discovered during testing and inspection may require additional changes to manufacturing methods. Moreover, facility and machinery upgrades necessary to construct and assemble the deckhouse are not all scheduled to be complete until March 2010—over a year after the start of construction of the first deckhouse. While the shipbuilder expects to complete efforts to meet the construction schedule, if difficulties occur, the deckhouses may not be delivered to the shipyards on time, disrupting the construction sequence of the ships. Further, the volume search radar (one of two radars in the dual band radar system) will not be installed during deckhouse construction as initially planned. Instead, installation will occur at the shipyard when the first ship is already afloat, a more costly approach. The change was partly due to delays in developing the volume search radar.
Land-based demonstrations of the volume search radar prototype originally planned to be done before starting ship construction will not be completed until 2009—almost 2 years later. Development difficulties center on the radar’s radome and transmit-receive units. The contractor has been unable to successfully manufacture the radome (a composite shield of exceptional size and complexity), and the transmit-receive units (the radar’s individual radiating elements) have experienced failures operating at the voltage needed to meet range requirements. While the Navy believes that the voltage problem has been resolved, upcoming land-based tests will be conducted at a lower voltage—and without the radome. The Navy will not demonstrate a fully capable radar at its required power output until after testing of the first production unit sometime before combat systems light-off in 2013. Crucial to realizing DDG 1000’s required manning reductions is the ability to achieve a high degree of computer automation. If the ship’s software does not work as intended, crew size would need to be increased to make up for any lack of automation. Given the risks associated with the ship’s software system, referred to as the total ship computing environment, the Navy initially planned to develop and demonstrate all software functionality (phased over six releases and one spiral) over 1 year before ship light-off. As a result of changes in the software development schedule, the Navy eliminated this margin. Until recently, the Navy was able to keep pace with its development schedule, successfully completing the first three software releases. However, the Navy is now entering the complex phases of software development when ship functionality is introduced. The Navy certified release 4 even though it did not meet about half of the software system requirements, mainly because of issues coding the ship’s command and control component—the heart of the ship’s decision-making suite.
Problems discovered in this release, coupled with the deferred work, may signify larger software issues that could disrupt the development of releases 5 and 6 and prevent the timely delivery of software to meet the ship's schedule.

Costs of the DDG 1000 ships are likely to exceed current budgets. If costs grow during lead ship construction due to technology, design, and construction risks, as experience shows is likely, remaining funds may not be sufficient to buy key components and pay for other work not yet under contract. Despite a significant investment in the lead ships, the remaining budget is likely insufficient to pay for all the effort necessary to make the ships operational. The Navy estimates a total shipbuilding budget of $6.3 billion for the lead ships. Of this amount, the Navy has approximately $363 million remaining in unobligated funds to cover its outstanding costs and to manage any cost growth for the two lead ships, but known obligations for the lead ships, assuming no cost growth during construction, range from $349 million to $852 million (see table 1). The main discrepancy is the current estimated cost of the combat systems. In order to create a cash reserve to pay for any cost increases that may occur during construction of the lead ships, the Navy has deferred contracting and funding work associated with conducting shipboard testing of the combat systems—and in some cases has also delayed purchasing and installing essential ship systems until later in the construction sequence. The Navy has estimated the cost of these combat systems to be around $200 million, while the contractor's estimate is over $760 million. If the agreed-on cost approaches the contractor's estimate, the Navy will not have enough in its remaining funds to cover the cost. There is little margin in the budget to pay for any unknown costs.
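The funding margin described above can be sketched with simple arithmetic; all figures come directly from the text, and the "best case" and "worst case" labels are illustrative, not terms the Navy uses.

```python
# Rough sketch of the lead-ship funding margin described above.
# Figures are taken from the text of this testimony.

unobligated_funds = 363e6                          # remaining unobligated funds
obligations_low, obligations_high = 349e6, 852e6   # range of known obligations

best_case_margin = unobligated_funds - obligations_low    # about +$14 million
worst_case_margin = unobligated_funds - obligations_high  # about -$489 million
```

If the combat systems cost settles near the contractor's estimate, the worst-case figure applies, leaving a shortfall of nearly half a billion dollars before any cost growth during construction.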
To ensure that there was enough funding available in the budget to cover the costs of building the lead ships, the Navy negotiated contracts with the shipbuilders that shifted costs or removed planned work from the scope of lead ship construction and reduced the risk contingency in the shipbuilders' initial proposals. For example, the Navy stated that it shifted in excess of $100 million associated with fabrication of the peripheral vertical launch system out of the scope of ship construction and funded this work separately using research and development funding. As a result, this work is no longer included in the $6.3 billion end cost to construct DDG 1000. To the extent that the lead ships experience cost growth beyond what is already known, more funding will be needed to produce operational ships. However, these problems will not surface until well after the shipyards have begun construction of the lead ships. Cost growth during construction of lead ships has historically been about 27 percent, and an independent estimate by the Department of Defense already projects the cost of the two lead ships to be $878 million higher than the Navy's budget. With ships as expensive as DDG 1000, even a small percentage of cost growth could lead to the need for hundreds of millions of dollars in additional funding.

The challenges facing DDG 1000 are not unique among Navy shipbuilding programs, nor are they unique within Department of Defense acquisition programs at large. Across the shipbuilding portfolio, the Navy has not been able to execute programs within cost and schedule estimates, which has, in turn, led to disruptions in its long-range construction plans. This outcome has largely resulted from Navy decisions to move ships forward into construction with considerable uncertainties—like immature technologies and unstable designs. In doing so, the Navy has effectively eroded its buying power, forcing near-term quantity reductions within its shipbuilding plan.
Because fleet requirements remain steady at 313 ships, the Navy must compensate for near-term construction deferrals by increasing ship construction in the out-years. Achieving this plan, however, will require significant funding increases in the future, which will likely be difficult to obtain. These near-term trade-offs could have long-term consequences for balancing mission, presence, industrial base, and manning tensions. For example, if ship quantities are deferred to the future to accommodate near-term cost growth, the Navy could be trading off presence and industrial base if additional funds do not materialize in the future.

Cost growth and schedule delays are persistent problems for shipbuilding programs, as they are for other weapon systems. These challenges are amplified for lead ships in a class (see figs. 1 and 2). The Navy's six most recent lead ships have experienced cumulative cost growth of over $2.4 billion above their initial budgets. These cost challenges have been accompanied by delays in delivering capability totaling 97 months across these new classes. The first San Antonio-class ship (LPD 17) was delivered to the warfighter incomplete and with numerous mechanical failures—52 months late and at a cost of over $800 million above its initial budget. For the LCS program, the Navy established a $220 million cost target and a 2-year construction cycle for each of the two lead ships. To date, costs for these two ships have exceeded $1 billion, and initial capability has been delayed by 21 months. Cost increases are also significant if the second ship is assembled at a different shipyard than the first ship. This was the case with SSN 775, with cost growth of well over $500 million. These outcomes result from the Navy consistently framing its shipbuilding programs around unexecutable business cases, whereby ship designs seek to accommodate immature technologies and design stability is not achieved until late in production.
New ship programs have moved forward through milestones whether or not the desired knowledge had been attained. In turn, initial ships in Navy programs require costly, time-consuming out-of-sequence work and rework during construction, and undesired capability trade-offs are often required. In essence, execution problems are built into the initial strategy for a new ship, as the scope of the ship—that is, the innovative content and complexity owing to multiple mission requirements—overmatches the time and money set aside to develop and construct the ship. For example, while the scope of the DDG 1000 and CVN 78 ships was driven by mission requirements, the schedules for these ships were set by shipyard workload needs or by the retirement schedule of a predecessor ship. The result is that the scope of work is compressed into a schedule based on other considerations.

LCS is a recent example. In this program, the Navy sought to concurrently design and construct two lead ships in an effort to rapidly meet pressing needs in the mine countermeasures, antisubmarine warfare, and surface warfare mission areas. However, changes to Navy requirements required redesign of major elements in both lead ships to provide enhanced survivability, even after construction had begun on the first ship. While these requirements changes improved the robustness of LCS designs, they contributed to out-of-sequence work, rework, and weight increases on the lead ships. These difficulties caused LCS construction costs to grow and delivery schedules to be extended, and prompted the Navy to reduce speed requirements for the class due to degraded hydrodynamic performance. In turn, the Navy canceled construction contracts for the third and fourth ships and used funds from other previously appropriated ships to pay for lead ship cost growth.
Although these steps increased the resources available to the two lead ships, continuing technology immaturity and unproven watercraft launch and recovery systems included within each design could trigger additional cost growth and schedule delays above and beyond current estimates.

The Ford-class aircraft carrier (CVN 78) also faces uncertainty related to its cost and schedule estimates and eventual capability. The business case for CVN 78 is framed around delivering the carrier to maintain the Navy's force of 11 operational carriers given the impending retirement of USS Enterprise (CVN 65), but it includes a cost target that leaves little if any margin for error. As construction begins, remaining technology risk in the program—particularly with the electromagnetic aircraft launch system (EMALS)—has positioned the program to face future construction challenges similar to those of other lead ships. Previously, the Navy planned to demonstrate full functionality of a ship-ready system prior to production and installation on CVN 78—an approach aimed at reducing risk to ship construction. However, the contractor encountered technical difficulties developing the prototype generator and meeting detailed Navy requirements, which left no margin in the schedule to accommodate unanticipated problems discovered in testing or production. In order to maintain the ship's construction schedule, the Navy adopted a test and production strategy that will test, produce, and ultimately install EMALS with a high degree of concurrency. At the same time test events are occurring, the Navy will authorize and begin production of the EMALS units intended for ship installation. While Navy officials recognize that concurrency is undesirable, they believe it is the only way to meet the ship's delivery date in September 2015.
However, by moving ahead with production in order to accommodate schedule milestones, CVN 78 is at risk of cost growth and ultimately schedule changes if unexpected problems arise in EMALS testing.

Since 2006, the Navy has annually issued a long-range plan for shipbuilding. These plans outline expected new ship procurements 30 years into the future and the funding the Navy estimates will be needed to support those procurements. The long-range plan is predicated upon the stated fleet need for 313 ships. However, mounting cost and schedule challenges in current programs have required the Navy to increasingly reshape its long-range ship procurement plans, placing the 313-ship goal in jeopardy. The Navy's long-range ship construction plan embodies multiple objectives, including building sophisticated ships to support new and existing missions, improving presence by increasing the numbers of ships available to execute these missions, designing ships and operating concepts that reduce manning, and supplying construction workloads that stabilize the industrial base. There is an inherent tension among the multiple objectives in the plan that is depicted in simple form in figure 3. This tension can play out in several ways. If, for example, a class of ship is expected to perform multiple challenging missions, it will have sophisticated subsystems and costs will be high. The cost of the ship may prevent its being built in desired numbers, subsequently reducing presence and reducing work for the industrial base. Requirements to reduce manning can actually add sophistication if mission requirements are not reduced. To some extent, this has happened with DDG 1000, as decisions have tended to trade quantities (which affect presence and the industrial base) in favor of sophistication. Several years ago, the program was expected to deliver 32 ships at an approximate unit cost of $1 billion.
Over time, the sophistication and cost of the ship grew as manning levels lower than those of current destroyers were maintained. Today, the lead ships are expected to cost $8.9 billion in research and development funding and another $6.3 billion to build. Similarly, cost growth in the LCS program has precluded producing ships at the rate originally anticipated, and it is possible the Navy will never regain the ships it recently traded away to save costs. Had the Navy anticipated that LCS lead ship costs would more than double, it may have altered its commitment to the program within its previous long-range shipbuilding plan.

The Navy's fiscal year 2009 long-range ship construction plan reflects many of the recent challenges that have confronted Navy shipbuilding programs. The plan provides for fewer ships at a higher unit cost—in both the near term and the long term—than what the Navy outlined in its fiscal year 2008 plan. Across the next 5 years, the Navy now expects to fund construction of 47 new ships at a cost of almost $74 billion. However, only 1 year ago the Navy expected to purchase 60 ships at a cost of $75 billion during this same time span. Instead, as cost growth has mounted in current shipbuilding programs, the Navy has had to reallocate funds planned for future ships to pay for ones currently under construction. These problems have also required the Navy to adjust its long-term plans. To compensate for its recent near-term quantity reductions, the Navy now plans to increase construction rates starting in fiscal year 2014. This strategy is based upon the premise that increased funding—on the order of $22 billion between fiscal years 2014 and 2018—will become available to support its plans. The Navy assumes this trend of increased funding—above and beyond annual adjustments for inflation—will continue through the end of its plan, which culminates in fiscal year 2038.
Cost and schedule pressures in current programs have also led the Navy to make a number of operational trade-offs to help maintain the viability of its shipbuilding goals. For instance, the Navy's current long-range plan includes a new provision to extend the service lives of current DDG 51 ships by 5 years to maintain an adequate number of surface combatants in its fleet. In addition, the Navy plans to extend the service life of selected attack submarines as well as the length of attack submarine deployments. These actions, however, will require the Navy to increase funding for future upgrades, modernization programs, and maintenance for these vessels—from sources the long-range plan does not identify.

The discussion over whether to conclude the DDG 1000 program at two ships should prompt some introspection, given that over $13 billion has been spent. In a sense, some of the key factors influencing the discussion—such as the high cost of the ship, the potential for cost growth, and the questionable affordability of the 30-year shipbuilding plan—are not markedly different from what they were a few years ago. Future success in shipbuilding depends on understanding why the weaknesses in the DDG 1000 business case, which now seem to threaten the program, did not prompt a similar re-examination several years ago. I believe that Navy managers and shipbuilders have enough knowledge about cost estimating, technology development, engineering, and construction to develop more executable business cases for new ships—that is, a better match between the scope of the ship and the time and money allotted for delivering it. The fact remains that we do not get these matches when they really count—before detail design and construction for a new ship are approved. So, the question is, why are well-understood elements of success not incorporated into new ship programs?
Part of the answer is that while managers may know what it takes to put an executable business case together, compromises in judgment have to be made to bring the business case into conformance with competing demands. For example, in a program like the DDG 1000 that undertook multiple technical leaps to meet challenging requirements, yet also had to deliver in time to match shipyard availability, pressures existed to make optimistic assumptions about the pace of technology maturation. At the same time, budget constraints exert pressure on cost estimates to be lower. These demands do not all fall just within the province of the Navy—industry, Congress, and the Office of the Secretary of Defense all play important roles.

Over time, the business case for DDG 1000 eroded. The primary mission of DDG 1000—and the foundation for its business case—was land attack. Yet, subsequent decisions ultimately forced trade-offs in that mission. For example, while including features like a more sophisticated radar and stealth characteristics may be good decisions individually, collectively they made the ship more expensive. Efforts to contain cost involved both reducing the quantity of ships and the actual land attack capability possessed by each individual ship. Ironically, the advanced gun system, which was the primary land attack weapon of the ship and a technical success to date, will now not have a platform to operate from beyond the first two DDG 1000s. The reconsideration of the DDG 1000 buy reflects poorly on the requirements, acquisition, and funding processes that produced the ship's business case. Unless some attempt is made to examine the root causes of decisions that hope for the best and result in poor outcomes, shipbuilding programs seem destined to the same fate: despite the best efforts to manage, the scope of the program will outstrip the cost and schedule budget.
This examination must begin with an honest self-appraisal of what each player in the shipbuilding acquisition process demands of programs in terms of requirements, technologies, design, industrial base, quantities, and cost. Otherwise, while the cost and other problems of current ships are lamented, these same problems could continue to curb the outcomes of future programs like the potentially sophisticated next-generation cruiser (CG(X)) or even renewed construction of DDG 51.

Mr. Chairman, that concludes my statement. I would be pleased to answer any questions.

To develop information on the status of the DDG 1000 program, we relied largely on our current work examining the DDG 1000 program, as well as a number of prior GAO products on shipbuilding programs. We supplemented this work with analysis of the Navy's most recent and previous long-range plans for ship construction and Selected Acquisition Reports for current Navy ships. Finally, we updated our estimates of lead ship costs through the use of the Navy's budget justification documentation.

For future questions about this statement, please contact me at (202) 512-4841 or francisp@gao.gov. Individuals making key contributions to this statement include Marie P. Ahearn, Christopher R. Durbin, Brian Egger, James Madar, Diana Moldafsky, Gwyneth B. Woolwine, and Karen Zuckerstein.

Defense Acquisitions: Cost to Deliver Zumwalt-Class Destroyers Likely to Exceed Budget. GAO-08-804. Washington, D.C.: July 31, 2008.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008.

Defense Acquisitions: Overcoming Challenges Key to Capitalizing on Mine Countermeasures Capabilities. GAO-08-13. Washington, D.C.: October 12, 2007.

Defense Acquisitions: Realistic Business Cases Needed to Execute Navy Shipbuilding Programs. GAO-07-943T. Washington, D.C.: July 24, 2007.

Defense Acquisitions: Navy Faces Challenges Constructing the Aircraft Carrier Gerald R. Ford within Budget.
GAO-07-866. Washington, D.C.: August 23, 2007.

Defense Acquisitions: Challenges Remain in Developing Capabilities in Naval Surface Fire Support. GAO-07-115. Washington, D.C.: November 30, 2006.

Defense Acquisitions: Challenges Associated with the Navy's Long-Range Shipbuilding Plan. GAO-06-587T. Washington, D.C.: March 30, 2006.

Defense Acquisitions: Progress and Challenges Facing the DD(X) Surface Combatant Program. GAO-05-924T. Washington, D.C.: July 19, 2005.

Defense Acquisitions: Plans Need to Allow Enough Time to Demonstrate Capability of First Littoral Combat Ships. GAO-05-255. Washington, D.C.: March 1, 2005.

Defense Acquisitions: Improved Management Practices Could Help Minimize Cost Growth in Navy Shipbuilding Programs. GAO-05-183. Washington, D.C.: February 28, 2005.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The U.S. Navy is about to begin construction of the first Zumwalt-class destroyer (DDG 1000) amid considerable uncertainties and a high likelihood of cost and schedule growth. Significant cost growth and schedule delays are persistent problems that continue to compromise the Navy's shipbuilding goals. This testimony focuses on (1) the challenges faced by the DDG 1000 program and (2) the strain such challenges portend for long-term shipbuilding plans. From the outset, DDG 1000 has faced a steep challenge framed by technical sophistication, demanding mission requirements, and a cost and schedule budget with little margin for error. The Navy has worked hard to manage the program within these competing goals. Yet recently, the Navy has discussed canceling construction of the remaining five DDG 1000 ships.
Although a cancellation may stem from fiscal necessity, it reflects poorly on the acquisition, requirements, and funding processes that produced the DDG 1000 business case. Future success in shipbuilding depends on understanding why the weaknesses in the DDG 1000 business case, which now seem to threaten the program, did not prompt a similar re-examination several years ago. The current program of record faces significant execution risks. The Navy will be pressed to complete a large amount of design work in time for the start of construction in October 2008. Demonstration of key components--particularly the deckhouse, the volume search radar, and the integrated power system--has fallen behind. Despite restructuring the construction schedule, margins between several major events are gone. For example, land-based tests of the integrated power system are now scheduled after installation on the lead ships. Software development has also proven challenging; the Navy certified the most recent software release even though it had not met about half of its requirements. Further, the full costs of constructing the two lead ships have not been entirely recognized or funded. The complexity and unique features of DDG 1000, along with the design work, testing, and actual construction experience to come, make cost growth beyond budgeted amounts likely. The challenges confronted by DDG 1000 are not unique. Across the shipbuilding portfolio, executing programs within cost and schedule estimates remains problematic, largely because of unexecutable business cases that allow programs to start with a mismatch between scope and resources. Collectively, problems in individual programs erode the buying power of the Navy's long-range construction budget. The Navy compensates for near-term construction deferrals by increasing construction in the out-years, but this will require significant funding increases in the future, which are unlikely to materialize.
Near-term trade-offs could have long-term consequences for maintaining a rational balance between mission capability, presence, industrial base, and manning. The Navy's consideration of cutting back the DDG 1000 program comes after over 10 years of development and $13 billion have been invested. Clearly, changes are needed in how programs are conceptualized and approved. Although the elements needed for success are well known, unrealistic compromises are made to make business cases conform to competing demands. An examination of the root causes of unexecutable business cases must be done, or shipbuilding programs will continue to produce unsatisfactory outcomes. This examination must begin with an honest appraisal of the competing demands made on new programs early in the acquisition process and of how to strike a better balance between them.
In 1965, Medicaid was established as a jointly funded federal and state program providing medical assistance to qualified low-income people. At the federal level, the program is administered by HCFA, an agency within the Department of Health and Human Services (HHS). Within a broad legal framework, each state designs and administers its own Medicaid program. States decide how much to reimburse providers for each service and whether to cover optional services, such as eyeglasses and dental care. The federal and state governments share in the cost of Medicaid, with the federal government paying at least 50 percent and not more than 83 percent of a state's costs, as determined by a formula. This formula considers a state's average per capita income relative to the national per capita income and is intended to reduce differences among the states in medical care coverage for the poor and to distribute the burden of financing program benefits fairly among the states. The formula-derived match rate is called the federal medical assistance percentage. In fiscal year 1997, the federal government's share averaged about 57 percent of Medicaid expenditures.

Besides making payments to medical providers for services rendered, states are required to make additional Medicaid payments (DSH payments) to hospitals that serve large numbers of Medicaid and other low-income patients. Within federal guidelines, states may designate disproportionate share hospitals but must include hospitals with high utilization rates for Medicaid or low-income patients. Hospitals must receive DSH payments if their Medicaid utilization rate is at least one standard deviation greater than the average for hospitals participating in Medicaid or if their low-income utilization rate exceeds 25 percent. States may designate other hospitals to receive DSH funding if the hospitals' Medicaid utilization rate is at least 1 percent of their total bed days.
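The text describes the matching formula only in general terms. As background, the statutory form of the federal medical assistance percentage (section 1905(b) of the Social Security Act) squares the ratio of state to national per capita income; the sketch below uses that form, bounded by the 50- and 83-percent limits the text cites, and should be read as illustrative rather than as part of this report.

```python
# Sketch of the federal medical assistance percentage (FMAP) formula.
# Statutory form (background, not spelled out in the text above):
#   FMAP = 1 - 0.45 * (state per capita income / national per capita income)^2
# bounded between the 50-percent floor and 83-percent ceiling cited above.

def fmap(state_pci: float, national_pci: float) -> float:
    raw = 1.0 - 0.45 * (state_pci / national_pci) ** 2
    return min(max(raw, 0.50), 0.83)  # apply the statutory floor and ceiling
```

Under this form, a state whose per capita income equals the national average receives a 55-percent match, wealthier states fall toward the 50-percent floor, and the poorest states are capped at 83 percent.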
Total DSH allocations to states are limited by federal formula, and within states, payments to individual hospitals are limited to the costs of uncompensated care that hospitals provide plus the shortfall between costs and payments for the care of Medicaid patients. In addition to designating certain hospitals to receive DSH payments, federal rules give states three options for setting minimum DSH payments. Within these limits, states have broad discretion when determining the size of Medicaid DSH payments to individual hospitals.

The creative financing mechanisms that states began using in the mid-1980s to maximize federal Medicaid contributions without effectively committing their own share of matching funds took various forms. One involved using provider-specific tax revenue or provider donations to fund a state's share of a later Medicaid payment to the providers. For example, hospitals might have paid $50 million in taxes or provider donations to the state. The state, in turn, made $60 million in payments to hospitals. The state received federal matching funds based on the Medicaid expenditure of $60 million. If the state had a 50-percent matching rate, it received $30 million of federal funds. Because the state received $80 million in revenue ($50 million from hospitals and $30 million from the federal government) and made $60 million in payments, it had a net gain of $20 million. Also, the hospitals received a net increase in revenues of $10 million, entirely from federal dollars.

States also benefited when they used their own funds to initiate payments to public providers. Under this financing mechanism, states generated federal matching funds by increasing payment rates for a particular group of public providers, such as nursing homes, public hospitals, or state psychiatric hospitals. However, these providers, through the use of intergovernmental transfers, returned all or the majority of the federal and state funds to state treasuries.
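The provider-tax example above can be restated as a short calculation; the figures are the hypothetical ones from the text, not actual state data.

```python
# Illustrative sketch of the provider-tax financing mechanism described
# above, using the hypothetical figures from the text.

provider_taxes = 50_000_000    # taxes or donations hospitals pay the state
medicaid_payment = 60_000_000  # Medicaid payment the state makes to hospitals
federal_match_rate = 0.50      # assumed 50-percent federal matching rate

federal_match = medicaid_payment * federal_match_rate   # $30 million
state_revenue = provider_taxes + federal_match          # $80 million
state_net_gain = state_revenue - medicaid_payment       # $20 million
hospital_net_gain = medicaid_payment - provider_taxes   # $10 million
```

The calculation makes the incentive explicit: the state's $20 million gain and the hospitals' $10 million gain are both financed entirely by the $30 million federal match.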
Federal legislation in 1991 and 1993 essentially banned provider donations, required that provider taxes be broad-based, limited provider taxes to 25 percent of a state's share of Medicaid expenditures, and prevented states from repaying provider taxes. Also, the legislation placed a cap on a state's total DSH payments and limited such payments to 100 percent of a hospital's unrecovered costs of serving Medicaid and uninsured patients. As these and other restrictions have been phased in, Medicaid DSH payments have dropped from a peak of $17.9 billion in 1995 to $14.7 billion in 1996. However, the legislation did not restrict states' use of intergovernmental transfers. Creative financing mechanisms involving DSH payments to public hospitals and intergovernmental transfers are still possible, although the limit for DSH payments of 100 percent of unrecovered costs constrains the hospitals from recovering more than their actual costs.

The federal government has never shared in the costs of services provided to adults in IMDs because mental health services have traditionally been considered a state and local responsibility. These hospitals may be reimbursed by Medicaid for services for patients younger than 21 or older than 64. They are also eligible for DSH payments, like other hospitals, if their Medicaid utilization rate is at least 1 percent. The majority of IMDs that receive DSH payments are state psychiatric hospitals.

Medicaid DSH payments in 1996 to state psychiatric hospitals in the six states were generally far larger than those to other types of hospitals. The states in our review devoted a significant share, from 20 to 89 percent in 1996, of their total DSH expenditures to state mental hospitals. DSH payments to state psychiatric hospitals and other state-owned hospitals enabled states to obtain federal Medicaid matching funds benefiting the state treasury.
The Balanced Budget Act of 1997 should reduce the DSH payments to state psychiatric hospitals from 1996 levels in some of our study states, because it limits the proportion of a state's DSH spending that may be paid to state psychiatric hospitals. However, the amount of the reductions will depend in part on how states use the flexibility inherent in the Medicaid program.

Four of the six states in our study made DSH payments to state psychiatric hospitals that were larger on average than payments to any other type of hospital. In Michigan and Texas, payments to state psychiatric hospitals were on average less than those to other state-owned hospitals. However, in both Michigan and Texas, payments to state psychiatric hospitals still averaged far more than payments to local public and private hospitals. Table 1 shows the average 1996 DSH payment for each type of hospital in the states we reviewed.

To determine whether the large DSH payments were a function of hospital size, we compared the average DSH payment per bed day for each type of hospital in the six study states. For five states, this ratio was greater for state psychiatric hospitals—and for other state-owned hospitals in one state—than for other types of hospitals, indicating that the difference in average DSH payments between groups does not result from differing hospital size. For example, Kansas state psychiatric hospitals received more than $150 in DSH payments for each bed day, while private hospitals received about $5, and average DSH payments per bed day to other state-owned hospitals and local public hospitals were about $19 and $11, respectively. Table 2 shows the average DSH payment per inpatient bed day in 1996 for the different types of hospitals in the six states.

DSH payments made to state psychiatric hospitals account for a significant portion of the total DSH payments made in these six states. In fact, three of the six allocated more than half of their total DSH spending to state psychiatric hospitals.
Table 3 shows the percentage of total DSH payments made to state psychiatric hospitals in 1996.

DSH payments to state psychiatric hospitals benefited the state by the amount of the federal portion of the DSH payment, because the federal funds were returned to the state treasury or replaced money the state would otherwise have needed to spend for hospital operations. For example, DSH payments made to New Hampshire Hospital, a state psychiatric hospital, are treated as board-and-care revenue to the hospital and returned to the state's general fund. The DSH payment returned to the treasury consists of both state funds spent and the federal contribution, resulting in a gain to the state treasury of the federal portion of the DSH payment, or 50 percent. In other states, officials told us that Medicaid DSH payments to state-operated hospitals reduced, by the federal share of the DSH payments, the amount of state funds spent to operate the hospital. For example, officials from Texas told us that the availability of DSH payments to state psychiatric hospitals has allowed the state to change its financing for these hospitals. They told us that while appropriation statutes for state psychiatric hospitals provided for general state revenues to cover full hospital operations, the amount of state-appropriated funds actually spent to operate these hospitals is reduced by the federal share of the DSH payment. If the DSH payment were not available, more of the appropriated funds would actually be spent on hospital operations.

State psychiatric hospitals in the six states generally served relatively fewer Medicaid patients than other hospitals while receiving larger DSH payments. Only 6 of 34 state psychiatric hospitals in the six states have a Medicaid utilization rate higher than 25 percent. However, this calculation does not include patients between ages 21 and 65 who would have been eligible for Medicaid coverage if they were not in an IMD.
Some of these hospitals serve many children covered by Medicaid. States are allowed to designate other hospitals to receive DSH payments as long as they have at least 1-percent Medicaid utilization. Average Medicaid utilization rates for state psychiatric hospitals in 1996 ranged from 3.1 percent in Texas to 22.1 percent in Kansas. In three states, at least one IMD had a Medicaid utilization rate close to the 1-percent minimum necessary to qualify for DSH. For example, one of the eight state psychiatric hospitals in Texas had a 1.4-percent rate, and five other Texas state psychiatric hospitals had rates lower than 3 percent. Other types of hospitals, with lower DSH payments, generally had higher, and in some cases much higher, Medicaid utilization rates. Other state-owned hospitals in Maryland, for example, had average Medicaid utilization rates five times as great as the average for the state’s psychiatric hospitals. North Carolina private hospitals averaged 19-percent Medicaid utilization, but the state’s four state psychiatric hospitals averaged less than half that rate, and the state psychiatric hospital with the highest rate (18 percent) still fell below the private hospitals’ average. Table 4 shows the average Medicaid utilization for each type of hospital for our study states in 1996. An exception to the pattern of higher Medicaid utilization rates in private hospitals is New Hampshire. There, the only state psychiatric hospital has a center for children, about 80 percent of whom qualify for Medicaid. State psychiatric hospitals generally received DSH payments at or near the maximum allowed by Medicaid rules, while other hospitals often received payments that were well below their maximums. Within federal limits, states targeted DSH payments to state psychiatric hospitals and in some cases to other state-owned hospitals. 
Local public hospitals and private hospitals generally received DSH payments at rates that were a smaller proportion of the maximum allowable. In some cases, the proportions were much smaller, as for local public hospitals in Kansas, which received only 8 percent of the maximum the state could have paid them. Table 5 shows the percentage of the maximum allowable DSH payments made to each group of hospitals for our study states in 1996. Individual hospital maximum DSH payments were established by the Omnibus Budget Reconciliation Act of 1993, which limits each hospital’s DSH payment to its cost of care for uninsured and Medicaid patients, less payments received from them or on their behalf. The cost of care for patients who have insurance is not included in the determination. Similarly, state and local funds appropriated to a hospital are not included in the calculation of individual hospital limits. DSH payments to other state-owned hospitals can provide to the state benefits similar to those of large DSH payments to state psychiatric hospitals. In some states, these hospitals used intergovernmental transfers to return their DSH funds to the state treasury. In addition, officials from North Carolina told us that local public hospitals returned the majority of their DSH payments to the state. In state fiscal year 1996, state psychiatric hospitals in the six states we reviewed received between 20 and 89 percent of total Medicaid DSH payments, even though state psychiatric hospitals represented a much smaller portion of the number of hospitals in the states and even though state psychiatric hospitals often had lower Medicaid utilization rates than other hospitals. In each of the six states, payments to state psychiatric hospitals covered more than 90 percent of the maximum allowable payment to state psychiatric hospitals. 
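The hospital-specific limit described above can be expressed as a simple formula. A minimal sketch of the OBRA 1993 calculation as characterized in this report, with hypothetical amounts (the variable names are ours, not statutory terms):

```python
def hospital_dsh_limit(uninsured_care_cost, medicaid_care_cost,
                       payments_received):
    """Hospital-specific DSH limit as described above: cost of care for
    uninsured and Medicaid patients, less payments received from them or
    on their behalf. Costs for insured patients and state or local
    appropriations to the hospital are excluded from the calculation."""
    return max(0, uninsured_care_cost + medicaid_care_cost
               - payments_received)

# Hypothetical hospital: $8M uninsured cost + $4M Medicaid cost
# - $3M in payments received = $9M maximum allowable DSH payment.
limit = hospital_dsh_limit(8_000_000, 4_000_000, 3_000_000)
```

A state paying such a hospital "90 percent of the maximum allowable" would pay 90 percent of this computed limit.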
These large DSH payments have enabled states to obtain federal matching funds that indirectly cover costs of services that state Medicaid programs cannot pay for directly. Implementation of restrictions on payments to IMDs in the Balanced Budget Act of 1997 should reduce some of these large payments. We provided a draft of this report to the HCFA Administrator for review and comment. HCFA officials who reviewed the report told us that the report was accurate. They pointed out that although DSH payments to IMDs enable the states to obtain federal matching funds to indirectly cover the costs of services provided to patients in IMDs that Medicaid cannot pay for directly, this is within the rules of the Medicaid program. They also suggested some technical changes to the report, and we modified the text to reflect their comments. We also discussed the information in the report on the states with officials in each state. They provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Secretary of HHS, the Administrator of HCFA, state officials in the states we contacted, and others who are interested. We will also make copies available to others upon request. Please call me at (202) 512-7114 or Leslie G. Aronovitz at (312) 220-7600 if you or your staff have any questions about this report. Other major contributors to this report include Paul D. Alcocer, Robert T. Ferschl, Barbara A. Mulliken, and Paul T. Wagner, Jr.

William J. Scanlon
Director, Health Financing and Systems Issues

Pursuant to a congressional request, GAO reviewed Medicaid disproportionate share hospital (DSH) program payments to state psychiatric institutions, focusing on: (1) how the amount of DSH payments to state psychiatric hospitals compares with DSH payments made to other types of hospitals; (2) how the proportion of Medicaid beneficiaries in state psychiatric hospitals compares with the proportion in other state hospitals; and (3) what proportion of the maximum allowable DSH payment states paid state psychiatric hospitals compared with the proportion of the maximum allowable paid to other types of hospitals.
GAO noted that: (1) Medicaid DSH payments to state psychiatric hospitals were far larger on average than payments made to other types of local public and private hospitals in states GAO contacted, enabling the states to obtain federal matching funds to indirectly cover costs of services provided to patients in institutions for mental diseases (IMD) that Medicaid cannot pay for directly; (2) overall, DSH payments to state psychiatric hospitals averaged about $29 million per hospital compared with $1.75 million for private hospitals; (3) in four of the six states, the average DSH payments to state psychiatric hospitals were also much larger than those to other state-owned hospitals; (4) in the two other states, DSH payments to the other state-owned hospitals were larger than payments to state psychiatric hospitals; (5) in all but one state, the average DSH payment per bed day was also much higher for state psychiatric hospitals than for other types of hospitals, indicating that the large DSH payments were not simply a function of hospital size; (6) the Balanced Budget Act of 1997 limits the proportion of a state's DSH payment that can be paid to IMDs, which should reduce such payments to state psychiatric hospitals in at least some cases; (7) state psychiatric hospitals receiving DSH payments in five of the six states GAO reviewed often served smaller proportions of Medicaid patients than other state-owned, local public, and private hospitals; (8) for example, the 1996 average Medicaid utilization rate at Texas state psychiatric hospitals was about 3 percent, while the average rate at other types of hospitals was much higher, up to 37 percent at local public hospitals; (9) however, in one state, the state psychiatric hospital served a higher proportion of Medicaid patients than other hospitals receiving DSH payments; (10) the states in GAO's review allocated DSH funds to state psychiatric hospitals at or near the maximum allowed by Medicaid rules and made DSH 
payments to other hospitals that were far below their limits; (11) each of the six states made 1996 DSH payments to its state psychiatric hospitals at more than 90 percent of the maximum allowable amount, and four of the six states paid these hospitals the maximum allowed; (12) other types of hospitals often received much less; and (13) for example, local public hospitals in Kansas as well as private hospitals in Michigan and North Carolina all received, on average, less than 10 percent of their allowed maximum.
To carry out its responsibilities under the nation’s environmental laws, EPA conducts an array of activities, such as promulgating regulations; issuing and denying permits; approving state programs; and issuing enforcement orders, plans, and other documents. Many of these activities may be subject to legal challenge. Statutes such as the Clean Air Act and Clean Water Act require EPA to take certain actions, such as issuing rules, to implement provisions of the law within certain statutorily designated time frames, and EPA is subject to legal challenge for not taking the mandatory actions by the required deadline. If the legal challenge is a deadline suit, EPA works with Justice to consider several factors in determining whether or not to settle the deadline suit and the terms of any settlement. Statutes establishing programs administered by EPA, and under which the agency may be sued, include 10 of the nation’s most prominent environmental laws, such as the Clean Air Act; Clean Water Act; Comprehensive Environmental Response, Compensation, and Liability Act (better known as the Superfund law); Emergency Planning and Community Right-to-Know Act; Federal Insecticide, Fungicide, and Rodenticide Act and related provisions of the Federal Food, Drug, and Cosmetic Act; Resource Conservation and Recovery Act; Safe Drinking Water Act; and Toxic Substances Control Act. Generally, the federal government has immunity from lawsuits, but federal laws authorize three types of suits related to EPA’s implementation of environmental laws. First, most of the major environmental statutes include “citizen suit” provisions authorizing citizens—including individuals, associations, businesses, and state and local governments—to sue EPA when the agency fails to perform an action mandated by law. These suits include deadline suits. Second, the major environmental statutes typically include judicial review provisions authorizing citizens to challenge certain EPA actions, such as promulgating regulations or issuing permits.
Third, the Administrative Procedure Act authorizes challenges to certain agency actions that are considered final actions, such as rulemakings and decisions on permit applications. As a result, even if a particular environmental statute does not authorize a challenge against EPA for a final decision or regulation, the Administrative Procedure Act may do so. A lawsuit challenging EPA’s failure to act may begin when the aggrieved party sends EPA a notice of intent to sue, if required, and a lawsuit challenging a final EPA action begins when a complaint is filed in court. Before EPA takes final action, the public or affected parties generally have opportunities to provide comments and information to the agency. In addition, administrative appeals procedures are available—and in many cases required—to challenge EPA’s final action without filing a lawsuit in a court. For example, citizens can appeal an EPA air emission permit to the agency’s Environmental Appeals Board. These administrative processes provide aggrieved parties with a forum that may be faster and less costly than a court. Generally, the environmental statutes’ citizen suit provisions require a prospective plaintiff to first send EPA a formal notice of intent to sue. Conversely, neither these statutes’ judicial review provisions nor the Administrative Procedure Act imposes a notice requirement. Settlement discussions between the aggrieved party and EPA may occur anytime after the agency action, at any point during active litigation, and even after judgment. FWS’s mission is to work with others to conserve, protect, and enhance fish, wildlife, and plants and their habitats for the continuing benefit of the American people. FWS is responsible for administering the Endangered Species Act for freshwater and land species.
Under the act, FWS works to implement its requirements, such as consulting with federal agencies to determine if actions may affect listed species or habitats identified as critical to the species’ survival, and acting on applications for permits required when non-federal activities will result in take of threatened or endangered species. The act authorizes parties to file challenges to government actions affecting threatened and endangered species. These lawsuits can include deadline suits as well as other types of lawsuits. Citizens may file deadline suits against EPA under laws for which the agency has primary regulatory authority, including the Superfund law; the Clean Air Act; the Clean Water Act; the Emergency Planning and Community Right-to-Know Act; the Safe Drinking Water Act; the Resource Conservation and Recovery Act; and the Toxic Substances Control Act. According to EPA and Justice officials, when a deadline suit is filed, the agencies work together to determine how to respond to the lawsuit, including whether or not to negotiate a settlement with the plaintiff to issue a rule by an agreed-upon deadline or allow the lawsuit to proceed. In making this decision, EPA and Justice consider several factors to determine which course of action is in the best interest of the government. According to EPA and Justice officials, these factors include (1) the cost of litigation, (2) the likelihood that EPA will win the case if it goes to trial, and (3) whether EPA and Justice believe they can negotiate a settlement that will provide EPA with sufficient time to complete a final rule if required to do so. EPA and Justice officials have often chosen to settle deadline suits when EPA has failed to fulfill a mandatory duty because it is very unlikely that the government will win the lawsuit. In many such cases, the only dispute is over the appropriate remedy (i.e., fixing a new date by which EPA should act).
Additionally, in such cases, officials may believe that negotiating a settlement is the course of action most likely to create sufficient time for EPA to complete the rulemaking if it is required to issue a rule. EPA and Justice have an agreement under which both must concur in the settlement of any case in which Justice represents EPA. See 28 C.F.R. §§ 0.160-0.163. Justice’s 1986 Meese memorandum provides that Justice should not enter into a settlement agreement that interferes with the agency’s authority to revise, amend, or promulgate regulations through the procedures set forth in the Administrative Procedure Act. As such, EPA officials stated that they have not agreed, and would not agree, to settlements in a deadline suit that finalize the substantive outcome of the rulemaking or prescribe the substance of the final rule. As discussed in our August 2011 report, the number of environmental litigation cases brought against EPA each year from fiscal year 1995 through fiscal year 2010 varied with no discernible trend. Similarly, data available from Justice, the Department of the Treasury, and EPA show that the costs associated with environmental litigation cases against EPA have varied from year to year for fiscal years 1998 through 2010, averaging at least $3.6 million per year with no discernible trend. Information regarding lawsuits against FWS is limited, with FWS data showing that the agency paid about $1.6 million in 26 cases from fiscal years 2004 through 2010. In August 2011, we reported that there were no aggregated data on environmental litigation or associated costs reported by federal agencies. The key agencies involved—Justice, EPA, and Treasury—maintained certain data on individual cases in decentralized databases. In particular, each of Justice’s litigation components maintained a separate case management system to gather information related to individual cases.
We were able to merge cases from two systems for purposes of our work. The average number of new cases filed against EPA each year was 155, ranging from a low of 102 new cases filed in fiscal year 2008 to a high of 216 cases filed in fiscal year 1997 (see fig. 1). In all, Justice defended EPA in nearly 2,500 cases from fiscal year 1995 through fiscal year 2010. The greatest number of cases was filed in fiscal year 1997; according to a Justice official, this may be explained by EPA’s 1997 revision of its national ambient air quality standards for ozone and particulate matter, which may have prompted some groups to sue. In addition, according to the same official, in 1997 EPA implemented a “credible evidence” rule, which was the subject of additional lawsuits. The fewest cases against EPA (102) were filed in fiscal year 2008, and Justice officials were unable to pinpoint any specific reasons for the decline. In fiscal years 2009 and 2010, the caseload increased. A Justice official said that it is difficult to know why the number of cases might increase because litigants sue for different reasons, and some time might elapse between an EPA action and a group’s decision to sue. As shown in figure 2, most cases against EPA were brought under the Clean Air Act, which represented about 59 percent of the approximately 2,500 cases that were filed during the 16-year period of our August 2011 report (i.e., fiscal year 1995 through fiscal year 2010). Cases filed under the Clean Water Act represented the next largest group of cases (20 percent), and the Resource Conservation and Recovery Act represented the third largest group of cases (6 percent). The lead plaintiffs filing cases against EPA during the 16-year period of our August 2011 report fit into several categories.
The largest category comprised trade associations (25 percent), followed by private companies (23 percent), local environmental groups and citizens’ groups (16 percent), and national environmental groups (14 percent). Individuals, states and territories, municipal and regional government entities, unions and workers’ groups, tribes, universities, and a small number of others we could not identify made up the remaining plaintiffs (see table 1). According to the stakeholders we interviewed for our August 2011 report, a number of factors—particularly a change in presidential administration, the passage of regulations or amendments to laws, and EPA’s failure to meet statutory deadlines—affect plaintiffs’ decisions to bring litigation against EPA. Stakeholders did not identify any single factor driving litigation, but instead, attributed litigation to a combination of different factors. According to most of the stakeholders we interviewed, a new presidential administration is an important factor in groups’ decisions to bring suits against EPA. Some stakeholders suggested that a new administration viewed as favoring less enforcement could spur lawsuits from environmental groups in response, or industry groups could sue to delay or prevent the outgoing administration’s actions. Other stakeholders suggested that if an administration is viewed as favoring greater enforcement of rules, industry may respond to increased activity by bringing suit against EPA to delay or prevent the administration’s actions, and certain environmental groups may bring suit with the aim of ensuring that required agency actions are completed during an administration they perceive as having views similar to the groups’ own. Most of the stakeholders interviewed also said that the development of new EPA regulations or the passage of amendments to environmental statutes may lead parties to file suit against the new regulations or against EPA’s implementation of those amendments. 
One stakeholder noted that an industry interested in a particular issue may become involved in litigation related to the development of regulations because it wishes to be part of the regulatory process and negotiations that result in a mutually acceptable rule. Data available for our August 2011 report from Justice, Treasury, and EPA show that the costs associated with environmental litigation cases against EPA have varied from year to year with no discernible trend. Justice’s Environment and Natural Resources Division spent a total of about $46.9 million to defend EPA in these cases from fiscal year 1998 through fiscal year 2010, averaging at least $3.6 million per year. Some cost data from Justice were not available, however, in part because Justice’s Environment and Natural Resources Division and the U.S. Attorneys’ Offices did not have a standard approach for maintaining key data for environmental litigation cases. For example, while the Environment and Natural Resources Division tracked attorney hours by case, the U.S. Attorneys’ Offices did not. In addition, owing to statutory requirements to pay certain successful plaintiffs for attorney fees and costs, Treasury paid a total of about $15.5 million to prevailing plaintiffs for attorney fees and costs related to these cases for fiscal years 2003 through 2010, averaging about $2 million per year. EPA paid a total of $1.5 million from fiscal year 2006 through fiscal year 2010 in attorney fees and costs, averaging about $305,000 per year. In April 2012, we reported that the FWS did not use a data system for cases brought against FWS to track attorney fees and costs paid by the Endangered Species Program but that the agency tracked this information in its Washington office using a spreadsheet. FWS officials gathered information on those cases paid by the Washington office and supplemented the information with four endangered species cases identified by the agency’s regional offices. 
However, not all regional offices tracked attorney fee payments, so the data may not be complete for fiscal years 2004 through 2010. That is, FWS officials were not sure that they had provided the complete universe of cases. FWS data show that FWS paid about $1.6 million in the 26 cases from fiscal years 2004 through 2010. In December 2014, we reported that the terms of settlements in deadline suits that resulted in EPA issuing major rules from May 31, 2008, through June 1, 2013, established a schedule for issuing rules. Specifically, the settlements required EPA either to promulgate a statutorily required rule or to make a determination that doing so was not appropriate or necessary pursuant to the relevant statutory provision. EPA received public comments on all but one of the draft settlements in these suits. According to EPA officials we interviewed for our December 2014 report, settlements in deadline suits primarily affected a single office within EPA because most deadline suits are based on provisions of the Clean Air Act for which that office is responsible. In our December 2014 report, we found that EPA issued 32 major rules from May 31, 2008, through June 1, 2013. According to EPA officials, the agency issued 9 of these rules following settlements in seven deadline suits. They were all Clean Air Act rules. Two of the settlements established a schedule to complete 1 or more rules, and five settlements established a schedule to complete 1 or more rules or make a determination that such a rule was not appropriate or necessary in accordance with the relevant statute. Some of the schedules included interim deadlines for conducting rulemaking tasks, such as publishing a notice of proposed rulemaking in the Federal Register.
In addition to schedules, the seven settlements also included, among other things, provisions that allowed deadlines to be modified (including the deadline to issue the final rule) and specified that nothing in the settlement can be construed to limit or modify any discretion accorded EPA by the Clean Air Act or by general principles of administrative law. Consistent with Justice’s 1986 Meese memorandum, none of the settlements we reviewed included terms that required EPA to take an otherwise discretionary action or prescribed a specific substantive outcome of the final rule. The Clean Air Act requires EPA, at least 30 days before a settlement under the act is final or filed with the court, to publish a notice in the Federal Register intended to afford persons not named as parties or intervenors to the matter or action a reasonable opportunity to comment in writing. EPA or Justice, as appropriate, must then review the comments and may withdraw or withhold consent to the proposed settlement if the comments disclose facts or considerations that indicate consent to the settlement is inappropriate, improper, inadequate, or inconsistent with Clean Air Act requirements. The other major environmental laws with provisions that allow citizens to file deadline suits do not have a notice and comment requirement for proposed settlements. According to an EPA official, with the exception of the agency’s pesticide program, EPA generally does not ask for public comments on defensive settlements (i.e., settlements on cases in which EPA is being sued) if the agency is not required to do so by statute. Of the 32 major rules that EPA issued from May 31, 2008, through June 1, 2013, the 9 issued following seven settlements in deadline suits were all Clean Air Act rules. For each settlement, EPA published a notice in the Federal Register providing the public the opportunity to comment on a draft of the settlement. EPA received from 1 to 19 public comments on six of the draft settlements.
No comments were received on one of the draft settlements. Based on EPA summaries of the comments, the comments concerned the reasonableness of the deadlines contained in the settlements or supported or objected to the settlements. For example, some comments supported the deadline or asserted that the deadlines should be accelerated, and other comments stated that EPA would have difficulty meeting the deadlines. EPA determined that none of the comments on any of the draft settlements disclosed facts or considerations that indicated that consent to the settlement in question would be inappropriate, improper, inadequate, or inconsistent with the act. According to EPA officials interviewed for our December 2014 report, settlements in deadline suits primarily affected a single office within EPA—the Office of Air Quality Planning and Standards (OAQPS)—because most deadline suits were based on provisions of the Clean Air Act for which that office is responsible. According to EPA’s Office of General Counsel, provisions in the Clean Air Act that authorize the National Ambient Air Quality Standards program and Air Toxics program account for most deadline suits. These provisions have recurring deadlines requiring EPA to set standards and to periodically review—and revise as appropriate or necessary—those standards. OAQPS sets these standards through the rulemaking process. For example, the Clean Air Act requires EPA to review and revise as appropriate National Ambient Air Quality Standards every 5 years and to review and revise as necessary technology standards for numerous air toxics generally every 8 years. The effect of settlements in deadline suits on EPA’s rulemaking priorities is limited to timing and order. OAQPS officials said that deadline suits affect the timing and order in which rules are issued by the National Ambient Air Quality Standards program and the Air Toxics program, but not which rules are issued.
The officials also noted that the effect of deadline suits on the two programs differs because of the different characteristics of the programs. In conclusion, environmental statutes allow litigation to check the authority of federal agencies as they carry out—or fail to carry out—their duties. Available data did not show discernible trends in the number of cases or costs associated with the litigation against EPA, and there was limited information on FWS. Information on deadline suits showed that the effect of settlements in deadline suits was primarily on one office and limited to the timing and order in which rules were issued. Chairman Rounds, Ranking Member Markey, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Susan Iott (Assistant Director), Charlie Egan, Cindy Gilbert, Rich Johnson, Tracey King, Marya Link, Maria Strudwick, and Kiki Theodoropoulos made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Environmental statutes, such as the Clean Air Act and Clean Water Act, allow citizens to file suit against EPA to challenge certain agency actions, such as issuing regulations or rules. Such laws also require EPA to take certain actions, such as issuing rules, to implement provisions of the law within certain statutorily designated time frames.
Citizens can sue EPA to compel the agency to take required actions, such as issuing a rule on time, in lawsuits often called deadline suits. EPA can negotiate a settlement to issue a rule by an agreed upon deadline. Where EPA is named as a defendant, Justice provides EPA's legal defense. If successful, plaintiffs may be paid for certain attorney fees and costs. Payments are made from Treasury's Judgment Fund or EPA's appropriations. Under the Endangered Species Act, FWS also faces lawsuits over its regulations and actions to carry out the act. As with EPA, Justice defends suits against FWS in court. This testimony is based on GAO reports issued from August 2011 through December 2014 about litigation directed at EPA and FWS. It focuses on (1) information on cases and associated costs, as available, for EPA and FWS and (2) information on the impact of deadline cases on EPA rulemaking. GAO did not make recommendations in the reports on which this testimony is based and is not making any in this testimony. As GAO reported in August 2011, the Environmental Protection Agency (EPA) faces legal challenges implementing the nation's key environmental laws. The number of environmental litigation cases brought against EPA each year for fiscal years 1995 through 2010 varied with no discernible trend. Data available from the Department of Justice, the Department of the Treasury, and EPA show that the costs associated with such cases against EPA have also varied from year to year with no discernible trend. Specifically, Justice staff defended EPA on an average of about 155 such cases each year from fiscal years 1995 through 2010, for a total of about 2,500 cases during that time. Most cases were filed under the Clean Air Act (59 percent of cases) and the Clean Water Act (20 percent of cases). 
According to stakeholders GAO interviewed, a number of factors—particularly a change in presidential administrations, new regulations or amendments to laws, or EPA's not meeting statutorily required deadlines—affected environmental litigation. Justice spent at least $46.9 million, averaging $3.6 million annually, to defend EPA in court from fiscal years 1998 through 2010. In addition, owing to statutory requirements to pay certain successful plaintiffs for attorney fees and costs, the Treasury paid about $15.5 million from fiscal years 2003 through 2010—averaging about $2 million per fiscal year—to plaintiffs in environmental cases. EPA paid approximately $1.5 million from fiscal years 2006 through 2010—averaging about $305,000 per fiscal year—to plaintiffs for environmental litigation claims. (All amounts are in constant 2015 dollars.) As one of the primary agencies responsible for implementing the Endangered Species Act, the U.S. Fish and Wildlife Service (FWS) faces litigation over its regulations and actions to carry out provisions of the act. In April 2012, GAO reported that FWS did not use a data system to track cases and associated fees and costs it paid. As a result, information regarding cases against FWS and associated costs was limited, with FWS data showing that the agency paid about $1.6 million in 26 cases from fiscal years 2004 through 2010. As GAO reported in December 2014, of the 32 major rules that EPA stated it promulgated from May 31, 2008, to June 1, 2013, nine were issued following seven settlements in deadline lawsuits, all under the Clean Air Act. The terms of the settlements in these deadline suits established a schedule to issue a statutorily required rule(s) or to issue a rule(s) unless EPA determined that doing so was not appropriate or necessary pursuant to the relevant statutory provision. None of the seven settlements included terms that finalized the substantive outcome of a rule.
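The per-year averages quoted above follow from the totals and inclusive fiscal-year ranges; a minimal arithmetic sketch (assuming the rounded totals reported in the testimony, in constant 2015 dollars):

```python
# Rough consistency check of the reported litigation-cost averages.
# Assumes inclusive fiscal-year ranges and the rounded totals quoted
# in the testimony (amounts in millions of constant 2015 dollars).
def avg_per_year(total_millions, first_fy, last_fy):
    years = last_fy - first_fy + 1  # fiscal-year ranges are inclusive
    return total_millions / years

justice_avg = avg_per_year(46.9, 1998, 2010)   # Justice defense costs
treasury_avg = avg_per_year(15.5, 2003, 2010)  # Judgment Fund payments
epa_avg = avg_per_year(1.5, 2006, 2010)        # EPA appropriation payments
```

The first two averages round to the reported $3.6 million and roughly $2 million per year; the third comes to $300,000, consistent with the reported average of about $305,000 once the unrounded total is used.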
The impact of settlements in deadline suits on EPA's rulemaking priorities was limited primarily to one office within EPA—the Office of Air Quality Planning and Standards (OAQPS)—because most deadline suits are based on provisions of the Clean Air Act for which that office is responsible. These provisions have recurring deadlines requiring EPA to set standards and to periodically review—and revise as necessary—those standards. OAQPS sets these standards through the rulemaking process. OAQPS officials said that deadline suits affect the timing and order in which rules are issued.
Three recent bills have been introduced to change the overall leadership and management of programs to combat terrorism and homeland security. On February 8, 2001, Representative Gilchrest introduced H.R. 525, the Preparedness Against Domestic Terrorism Act of 2001, which proposes establishing a President’s Council on Domestic Terrorism Preparedness within the Executive Office of the President to address preparedness and consequence management issues. On March 21, 2001, Representative Thornberry introduced H.R. 1158, the National Homeland Security Act, which advocates the creation of a cabinet-level head within the proposed National Homeland Security Agency to lead homeland security activities. On March 29, 2001, Representative Skelton introduced H.R. 1292, the Homeland Security Strategy Act of 2001, which calls for the development of a homeland security strategy developed by a single official designated by the President. Related proposals from congressional committee reports and congressionally chartered commissions provide additional, often complementary, options for structuring and managing federal efforts to combat terrorism. These include Senate Report 106-404 to Accompany H.R. 4690 on the Departments of Commerce, Justice, and State, the Judiciary, and Related Agencies Appropriation Bill 2001, submitted by Senator Gregg on September 8, 2000; the report by the Gilmore Panel (the Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction, chaired by Governor James S. Gilmore, III) dated December 15, 2000; and the report of the Hart-Rudman Commission (the U.S. Commission on National Security/21st Century, chaired by Senators Gary Hart and Warren B. Rudman) dated January 31, 2001. H.R. 1158 is based upon the report of the Hart-Rudman Commission. The bills and related proposals vary in the scope of their coverage. H.R. 
525 focuses on federal programs to prepare state and local governments for dealing with domestic terrorist attacks. Both H.R. 1158 and H.R. 1292 focus on the larger issue of homeland security, which includes threats other than terrorism, such as military attacks. However, only H.R. 1292 includes a specific definition of homeland security. The Senate Report 106-404 proposal is limited to domestic terrorism preparedness, including programs for both crisis and consequence management. The Gilmore Panel report includes both international and domestic terrorism programs. The Hart-Rudman Commission report (like H.R. 1158) focuses on the larger issue of homeland security. The bills and related proposals also vary in where they locate the focal point for overall leadership. Federal efforts to combat terrorism are inherently difficult to lead and manage because the relevant policy, strategies, programs, and activities cut across more than 40 agencies. The bills and related proposals would create a single focal point for programs to combat terrorism, and some would have the focal point perform many of the same functions. For example, some of the proposals would have the focal point lead efforts to develop a national strategy. The proposals (with one exception) would have the focal point appointed with the advice and consent of the Senate. The various bills and proposals differ in where they would locate the focal point for overall leadership and management. The two proposed locations for the focal point are in the Executive Office of the President or in a Lead Executive Agency. Table 1 summarizes the various bills and proposals regarding the focal point for overall leadership, the scope of its activities, and its location.
Based upon our analysis of legislative proposals, various commission reports, and our ongoing discussions with agency officials, each of the two locations for the focal point—the Executive Office of the President or a Lead Executive Agency—has its potential advantages and disadvantages. An important advantage of placing the position with the Executive Office of the President is that the focal point would be positioned to rise above the particular interests of any one federal agency. Another advantage is that the focal point would be located close to the President to resolve cross-agency disagreements. A disadvantage of such a focal point would be the potential to interfere with operations conducted by the respective executive agencies. Another potential disadvantage is that the focal point might hinder direct communications between the President and the cabinet officers in charge of the respective executive agencies. Alternatively, a focal point with a Lead Executive Agency could have the advantage of providing a clear and streamlined chain of command within an agency in matters of policy and operations. Under this arrangement, we believe that the Lead Executive Agency would have to be one with a dominant role in both policy and operations related to combating terrorism. Specific proposals have suggested that this agency could be either the Department of Justice (per Senate Report 106-404) or an enhanced Federal Emergency Management Agency (per H.R. 1158 and its proposed National Homeland Security Agency). Another potential advantage is that the cabinet officer of the Lead Executive Agency might have better access to the President than a mid-level focal point with the Executive Office of the President. A disadvantage of the Lead Executive Agency approach is that the focal point—which would report to the cabinet head of the Lead Executive Agency—would lack autonomy.
Further, a Lead Executive Agency would have other major missions and duties that might distract the focal point from combating terrorism. Also, other agencies may view the focal point’s decisions and actions as parochial rather than in the collective best interest. Based upon the problems we have identified during 5 years of GAO evaluations, we believe the following actions need to be taken: (1) create a single high-level federal focal point for policy and coordination, (2) develop a comprehensive threat and risk assessment, (3) develop a national strategy with a defined end state against which to measure progress, (4) analyze and prioritize governmentwide programs and budgets to identify gaps and reduce duplication of effort, and (5) coordinate implementation among the different federal agencies. The three bills would collectively address many of these actions. We will now discuss each of these needed actions, executive branch attempts to complete them, and how the three bills would address them. In our testimony last May, we reported that overall federal efforts to combat terrorism were fragmented. To provide a focal point, the President appointed a National Coordinator for Security, Infrastructure Protection, and Counterterrorism at the National Security Council. This position, however, has significant duties indirectly related to terrorism, including infrastructure protection and continuity of government operations. Notwithstanding the creation of this National Coordinator, it was the Attorney General who led interagency efforts to develop a national strategy. Thus, at least two top officials are responsible for combating terrorism, and both of them have other significant duties. H.R. 525 would set up a single, high-level focal point in the President’s Council on Domestic Terrorism Preparedness. In addition, H.R.
525 would require that the new Council’s executive chairman—who would represent the President as chairman—be appointed with the advice and consent of the Senate. This last requirement would provide Congress with greater influence and raise the visibility of the office. H.R. 1158 would designate the Director of the proposed National Homeland Security Agency as the focal point for policy and coordination. As with H.R. 525, the appointment of the Director by the President, with the advice and consent of the Senate, provides Congress with greater influence and raises the visibility of the office. H.R. 1292 would require the President to designate a single official within the U.S. government to be responsible and accountable to the President concerning homeland security. We testified in July 2000 that one step in developing sound programs to combat terrorism is to conduct a threat and risk assessment that can be used to develop a strategy and guide resource investments. The executive branch has made progress in implementing our recommendations that threat and risk assessments be done to improve federal efforts to combat terrorism. However, we remain concerned that such assessments are not being coordinated across the federal government. H.R. 525 would require a threat, risk, and capability assessment that examines critical infrastructure vulnerabilities, evaluates federal and applicable state laws used to combat terrorist attacks, and evaluates available technology and practices for protecting critical infrastructure against terrorist attacks. This assessment would form the basis for the domestic terrorism preparedness plan and annual implementation strategy. Although H.R. 1158 would not require the National Homeland Security Agency Director to conduct a threat and risk assessment, it would direct this individual to establish and maintain strong mechanisms for sharing information and intelligence with U.S.
and international intelligence entities. Information and intelligence sharing may help identify potential threats and risks against which the United States could direct resources and efforts. H.R. 1292 would require the President to conduct a comprehensive homeland security threat and risk assessment. This assessment would be the basis for a comprehensive national strategy. In our testimony last July, we noted that the United States has no comprehensive national strategy that could be used to measure progress. The Attorney General’s Five-Year Plan represents a substantial interagency effort to develop a federal strategy, but it lacks defined outcomes. The Department of Justice believes that its current plan has measurable outcomes for specific agency actions. However, in our view, the plan needs to go beyond this to define an end state. As we have previously testified, the national strategy should incorporate the chief tenets of the Government Performance and Results Act of 1993 (P.L. 103-62). The Results Act holds federal agencies accountable for achieving program results and requires federal agencies to clarify their missions, set program goals, and measure performance toward achieving these goals. H.R. 525 would require the new council to publish a domestic terrorism preparedness plan with objectives and priorities; an implementation plan; a description of roles of federal, state, and local activities; and a defined end state with measurable standards for preparedness. H.R. 1158 would require the annual development of a federal response plan for homeland security and emergency preparedness and would require the Director to provide overall planning and guidance to federal agencies concerning homeland security. The bill would require the Director to work with state and local governments, but it would not explicitly require that the plan include the roles of state and local governments. H.R.
1292 would require the President to develop a strategy and multiyear phased implementation plan and budget for antiterrorism and consequence management. The bill would require the inclusion of specific, measurable objectives based on findings identified in a threat and risk assessment. Furthermore, it would require the strategy to (1) define federal agencies’ responsibilities; (2) permit the selective use of military personnel and assets without infringing on civil liberties; (3) provide for the use of intelligence assets and capabilities; and (4) augment existing medical response capabilities and equipment stockpiles at the federal, state, and local levels. In our December 1997 report, we reported that there was no mechanism to centrally manage funding requirements and requests to ensure an efficient, focused governmentwide approach to combat terrorism. Our work led to legislation that required the Office of Management and Budget to provide annual reports on governmentwide spending to combat terrorism. These reports represent a significant step toward improved management by providing strategic oversight of the magnitude and direction of spending for these programs. Yet, we have not seen evidence that these reports have established priorities or identified duplication of effort. H.R. 525 would require the new council to develop and make budget recommendations for federal agencies and the Office of Management and Budget. The Office of Management and Budget would have to provide an explanation in cases where the new council’s recommendations were not followed. The new council would also identify and eliminate duplication, fragmentation, and overlap in federal preparedness programs. H.R. 1158 would not explicitly require an analysis and prioritization of governmentwide budgets to identify gaps and reduce duplication of effort.
Rather, it would require the Director to establish procedures to ensure that the planning, programming, budgeting, and financial activities of the National Homeland Security Agency use funds that are available for obligation for a limited number of years. H.R. 1292 would provide for the development of a comprehensive budget based on the homeland security strategy and would allow for the restructuring of appropriation accounts by the Director of the Office of Management and Budget as necessary to fulfill the organizational and operational changes needed to implement the national strategy. In our April 2000 testimony, we observed that federal programs addressing terrorism appear in many cases to be overlapping and uncoordinated. To improve coordination, the executive branch created organizations like the National Domestic Preparedness Office and various interagency working groups. In addition, the annual updates to the Attorney General’s Five-Year Plan now track individual agencies’ accomplishments. Nevertheless, we have noted that the multitude of similar federal programs has led to confusion among the state and local first responders they are meant to serve. H.R. 525 would require the new council to coordinate and oversee the implementation of related programs by federal agencies in accordance with the proposed domestic terrorism preparedness plan. The new council would also make recommendations to the heads of federal agencies regarding their programs. Furthermore, the new council would provide notification to any department that it believes has not complied with its responsibilities under the plan. H.R. 1158 would require extensive coordination among federal agencies—especially those under the National Homeland Security Agency—concerning their activities relating to homeland security. For instance, the bill would require the agency’s Directorate of Critical Infrastructure Protection to coordinate efforts to address vulnerabilities in the U.S.
critical infrastructure by working with other federal agencies to establish security policies, standards, and mechanisms and to share intelligence. Additionally, H.R. 1158 would instruct the Directorate for Emergency Preparedness and Response to coordinate activities among private sector entities and federal agencies, and the bill would delegate the coordination of all U.S. border security activities to the Directorate of Prevention. H.R. 1292 would require a national strategy to provide for the coordination of federal programs. For example, the strategy would identify federal agencies and their respective roles and responsibilities for homeland security. In our ongoing work, we have found that there is no consensus—in Congress, the Executive Branch, the various panels and commissions, and among organizations representing first responders—on the matters discussed in our testimony. Specifically, there is no consensus on the required scope of duties or the location for a single focal point. In addition, the three bills provide the focal point with different, but often similar, duties to improve the management of federal programs. To the extent that these three bills—or some hybrid of them all—address the problem areas we have identified above, we believe that federal programs to combat terrorism will be improved. Developing a consensus on these matters and providing the focal point with legitimacy and authority through legislation is an important task that lies ahead. We believe that this hearing, and the debate that it engenders, will help to reach that consensus. This concludes our testimony. We would be happy to answer any questions you may have. For future questions about this testimony, please contact Raymond J. Decker, Director, Defense Capabilities and Management at (202) 512-6020. Individuals making key contributions to this statement include Stephen L. Caldwell and Krislin Nalwalk.
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy (GAO-01-556T, Mar. 27, 2001).
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response (GAO-01-15, Mar. 20, 2001).
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination (GAO-01-14, Nov. 30, 2000).
Combating Terrorism: Linking Threats to Strategies and Resources (GAO/T-NSIAD-00-218, July 26, 2000).
Combating Terrorism: Comments on Bill H.R. 4210 to Manage Selected Counterterrorist Programs (GAO/T-NSIAD-00-172, May 4, 2000).
Combating Terrorism: How Five Foreign Countries Are Organized to Combat Terrorism (GAO/NSIAD-00-85, Apr. 7, 2000).
Combating Terrorism: Issues in Managing Counterterrorist Programs (GAO/T-NSIAD-00-145, Apr. 6, 2000).
Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training (GAO/NSIAD-00-64, Mar. 21, 2000).
Critical Infrastructure Protection: Comprehensive Strategy Can Draw on Year 2000 Experiences (GAO/AIMD-00-1, Oct. 1, 1999).
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack (GAO/NSIAD-99-163, Sept. 7, 1999).
Combating Terrorism: Observations on Growth in Federal Programs (GAO/T-NSIAD-99-181, June 9, 1999).
Combating Terrorism: Issues to Be Resolved to Improve Counterterrorist Operations (GAO/NSIAD-99-135, May 13, 1999).
Combating Terrorism: Observations on Federal Spending to Combat Terrorism (GAO/T-NSIAD/GGD-99-107, Mar. 11, 1999).
Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency (GAO/NSIAD-99-3, Nov. 12, 1998).

This testimony discusses three bills that would change the overall leadership and management of programs to combat terrorism. The three bills—H.R. 525, H.R. 1158, and H.R. 1292—vary in scope. H.R. 525 focuses on federal programs to prepare state and local governments for domestic terrorist attacks. Both H.R.
1158 and H.R. 1292 focus on the larger issue of homeland security, which includes terrorism and additional threats such as military attacks. The bills are similar in that they all advocate a single focal point for programs to combat terrorism. However, some bills place the focal point in the Executive Office of the President and others place it with a lead executive agency. In addition, the three bills provide the focal point with different, but often similar, duties to improve the management of federal programs. To the extent that these three bills—or some hybrid of them—address these problem areas, GAO believes that federal programs to combat terrorism will be improved. Developing a consensus on these matters and providing the focal point with legitimacy and authority through legislation are important tasks that lie ahead.
With passage of the Anti-Drug Abuse Act of 1988 (hereinafter referred to as the 1988 Act), Congress created the Office of National Drug Control Policy (ONDCP) to better plan the national drug control effort and assist Congress in overseeing that effort. In this role, ONDCP is, among other things, responsible for overseeing and coordinating the efforts of federal drug control agencies and programs. ONDCP is the President’s primary policy office for drug issues, providing advice and governmentwide oversight of drug programs and coordinating development of the President’s National Drug Control Strategy. The 1988 Act, as amended, requires ONDCP to (1) develop a national drug control strategy with short- and long-term objectives and annually revise and issue a new strategy to take into account what has been learned and accomplished during the previous year, (2) develop an annual consolidated drug control budget providing funding estimates for implementing the strategy, and (3) oversee and coordinate implementation of the strategy by federal agencies. As part of its responsibility for developing the annual National Drug Control Strategy, ONDCP is also required to include in the strategy an evaluation of the effectiveness of federal drug control efforts during the previous year. This evaluation is to include assessments of the reduction in drug use, reduction in drug availability, reduction in drug use consequences, and the status of drug treatment. In developing the consolidated national drug control budget, the 1988 Act prescribes a budget review and certification process whereby ONDCP (1) receives annual drug budget submissions from each program manager, agency head, and department head with drug control responsibilities and (2) certifies in writing that these budget submissions are adequate to implement the objectives of the National Drug Control Strategy for the budget request year. ONDCP requires federal drug control agencies to follow a detailed process in developing their budget proposals.
Annually, ONDCP is to develop national drug control budget submission requirements that it sends to all federal drug control agencies. These requirements identify specific programs, agencies, and departments that are to submit budgets to ONDCP; dates these budgets are due to be submitted; and specific information required to be included in each submission. In addition, ONDCP is required under the 1988 Act to provide, by July 1 of each year, budget recommendations (in the form of drug program initiatives) to the heads of departments and agencies with drug control responsibilities. The 1988 Act requires that each program manager, agency head, and department head with drug control responsibilities transmit their drug budget request to ONDCP at the same time such request is submitted to their superiors (and before submission to the Office of Management and Budget (OMB)). The ONDCP Director is then required to review each budget request and certify whether it is adequate to implement the objectives of the National Drug Control Strategy. The ONDCP Director must notify each program manager, agency head, and department head regarding its budget certification decisions. For budget requests not certified as adequate, ONDCP must recommend an initiative or funding level that would make the request adequate. The act further requires that department and agency heads comply with any such ONDCP recommendation before submitting their budgets to OMB. For fiscal year 1999, the national drug control budget, as enacted, totaled about $17.9 billion. This was $816 million more than the amount requested in the President’s fiscal year 1999 proposed drug budget and an increase of $1.9 billion over fiscal year 1998 enacted levels. About 67 percent of the enacted budget is to fund supply reduction activities (such as drug interdiction), with the remaining 33 percent funding demand reduction activities (such as drug treatment).
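The FY1999 totals above imply the request and prior-year figures by simple subtraction; a back-of-the-envelope sketch (assuming the rounded amounts quoted in this report):

```python
# Figures implied by the FY1999 national drug control budget totals
# quoted above (amounts in billions of dollars; all values rounded).
enacted_fy1999 = 17.9
requested_fy1999 = enacted_fy1999 - 0.816  # ~17.1: President's FY1999 request
enacted_fy1998 = enacted_fy1999 - 1.9      # ~16.0: FY1998 enacted level
supply_share = 0.67 * enacted_fy1999       # ~12.0: supply reduction (e.g., interdiction)
demand_share = 0.33 * enacted_fy1999       # ~5.9: demand reduction (e.g., treatment)
```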
The Department of Justice (DOJ) received the most departmental drug control funding for fiscal year 1999—about $7.7 billion—while the three largest agency budgets were for the Bureau of Prisons ($2.1 billion), the Substance Abuse and Mental Health Services Administration (SAMHSA) ($1.5 billion), and the Drug Enforcement Administration (DEA) ($1.3 billion). We did our audit work between September 1998 and April 1999 in accordance with generally accepted government auditing standards. A detailed description of our objectives, scope, and methodology is contained in appendix I. ONDCP’s process for certifying fiscal year 1999 drug control agency budget submissions was generally consistent with the requirements of the 1988 Act. In some cases, ONDCP was not able to review complete agency budgets prior to making its certification decisions. In those cases, however, ONDCP was able to obtain enough information from department or agency officials to enable ONDCP to make its certification decisions. For the four agencies we reviewed in detail, the results of the budget certification process were varied. Two agencies—DEA and Customs—responded to ONDCP’s summer budget review by submitting fall drug budgets that were lower overall than those submitted in the summer. However, ONDCP determined they were still sufficient to be certified as adequate to address the National Strategy. One agency—SAMHSA—submitted a lower fall budget that did not address ONDCP’s summer recommendations, but a budget compromise was later worked out between the agency and ONDCP that enabled the budget to be certified. Finally, the Department of Defense’s (DOD) fall budget was not certified because DOD did not address ONDCP’s recommended program increases. After the budget certification process was completed, ONDCP monitored the budget and appropriations debate in order to influence development of a national drug budget that was consistent with the National Drug Control Strategy.
Significantly, for three of the four agencies we reviewed, their fiscal year 1999 proposed drug budgets were not increased as a result of ONDCP’s budget certification process. All four, however, were increased during the congressional appropriations phase of the budget process. As indicated in table 1 below, the fiscal year 1999 budget cycle began with the issuance of the February 1997 National Drug Control Strategy. The goals and objectives included in the National Strategy provide the overall framework for drug control agencies and departments to use in developing their initial fiscal year 1999 drug budget requests. To augment the general guidance in the National Strategy, in April 1997 ONDCP provided guidance to departmental budget directors describing the certification process, identifying budget submission deadlines, and specifying a format for each submission. While ONDCP required all departments with drug control responsibilities to submit budget requests in the fall of 1997 (prior to their submission to OMB), certain agencies, bureaus, and programs were also required to submit budget requests in the summer of 1997 (at the same time the request was submitted to their department heads). The purpose of these advance reviews, according to the guidance, is to affect funding levels requested by Cabinet officers in their subsequent budget submissions to OMB. In late June 1997, ONDCP issued additional guidance to Cabinet officers identifying drug control funding priorities, as required under the act. This guidance outlined specific program initiatives, by National Strategy goal, that were to be addressed in agencies’ budget submissions to ONDCP. Beginning with the fiscal year 1999 drug budget, ONDCP identified 30 such initiatives that agencies were to address during the next 5 years. ONDCP requested that the initial summer drug budgets be submitted by early June 1997. 
ONDCP budget and program analysts reviewed the drug budget submissions and assessed their adequacy to support the goals and objectives articulated in the current National Strategy—in this case, the 1997 strategy. According to ONDCP officials, the specific level of analysis is left up to the discretion of the individual budget analysts but normally involves a subjective assessment of three factors:

1. Is the budget consistent with the goals and objectives outlined in the National Strategy?
2. Does the budget address specific drug control initiatives outlined in ONDCP’s annual guidance?
3. Are funding levels consistent with overall budget trends and at amounts sufficient to carry out individual programs?

Determination of adequacy, according to ONDCP internal memorandums, is not meant to be a technical analysis of the budget, but rather a collective opinion—of budget analysts, program analysts, and ultimately ONDCP managers—that agencies are asking for sufficient funding to carry out existing programs and new initiatives to support the National Strategy. During July and August 1997, ONDCP began notifying drug control agencies of its proposed certification decisions on the summer budget submissions. ONDCP prepared precertification letters, which it provided to agencies, identifying specific areas in the budget that must be addressed in order for the budget to be certified. ONDCP officials have stated that these letters allow the agencies time to revise their budgets prior to submitting them in the fall to OMB. This approach also allows ONDCP to concentrate its certification efforts earlier in the budget process, when there is more time for review and comment prior to the involvement of OMB. These letters are typically provided at the department level, except for independent agencies. For the fiscal year 1999 budget cycle, the ONDCP Director personally met with Cabinet officers or their designees to discuss funding priorities.
According to ONDCP officials, the precertification letters were sent just prior to or brought to these meetings and served as informal agendas for the discussions. ONDCP requested that fall budgets be submitted prior to the time they were provided to OMB (typically during September). The fall budget review and certification process was similar to the summer process, with the reviewers looking specifically at existing programs and new initiatives that ONDCP had identified during the summer reviews (and documented in a precertification letter) as needing additional funding. Following this review, ONDCP notified each department or agency of its final certification decision. Most of the certification letters were issued in late November 1997, with two others provided in early December 1997 and one in early November 1997. ONDCP officials noted that, in making the final certification decision for each individual drug control agency, there can be a range of overall funding levels that are considered adequate to achieve the goals and objectives of the National Strategy. Therefore, although an agency’s overall drug budget may decrease from summer to fall, if the core drug control program initiatives remain adequately funded, the overall drug budget can still be certified. In making its fiscal year 1999 certification decisions, ONDCP officials told us they used a selective review approach that corresponds to the way budgets are normally prepared and submitted during the federal budget cycle—reviewing mostly agencies and programs during the summer process, and focusing on departments during the fall process. Rather than individually certifying every program, agency, and department with drug control responsibilities, ONDCP issued certification letters primarily at the department level. Using this approach, ONDCP reviewed approximately 93 percent of the proposed fiscal year 1999 national drug budget for the purpose of certification. 
For fiscal year 1999, ONDCP issued letters to 14 departments, agencies, or programs notifying them of its budget certification decisions:

- Ten—the Corporation for National and Community Service and the departments of Education, Health and Human Services, Housing and Urban Development, the Interior, Justice, Labor, Transportation, the Treasury, and Veterans Affairs—were fully certified.
- Three—the U.S. Information Agency and the departments of State and Agriculture—were certified, although none of the three had formally submitted a complete fiscal year 1999 drug budget at the time the certification decision was made. ONDCP made its decision based on partial budgets as submitted (Agriculture) and advance budget information provided by department and agency staff (U.S. Information Agency and State). ONDCP also noted that the Department of State’s budget was only “minimally” adequate, based on the fact that State requested increased funding for its counterdrug efforts only in the Andean region, not in Mexico or the Caribbean.
- One—the Department of Defense—was decertified because DOD did not fund its drug control program at levels deemed necessary by the ONDCP Director.

ONDCP’s budget recommendations do not always result in increased agency drug budgets. As noted above, ONDCP uses precertification letters during the budget certification process to notify agencies of specific areas in their drug budgets that should be addressed in order to be certified. For the four agencies we reviewed in detail, this process had varied results in effecting changes in those agencies’ drug budgets. DEA’s mission is to enforce our nation’s drug laws and regulations and to bring drug traffickers to justice. In carrying out its mission, DEA is the lead federal agency responsible for enforcement of narcotics and controlled substance laws and regulations.
DEA’s fiscal year 1999 drug budget request primarily supported goal number 5 of the National Drug Control Strategy—break foreign and domestic sources of supply. DEA’s primary responsibilities include investigating major interstate and international drug traffickers and violent criminal and drug gangs; coordinating and cooperating with federal, state, and local law enforcement agencies; and working on drug law enforcement programs with their counterparts in foreign countries. In fiscal year 1998, DEA’s enacted budget totaled about $1.2 billion, all of which was in drug control programs. ONDCP initially hosted a meeting in early May 1997 with senior budget officials from the principal Justice counterdrug bureaus: DEA, Federal Bureau of Investigation (FBI), Immigration and Naturalization Service, U.S. Attorneys, and the Marshals Service. ONDCP issued formal written program guidance to DOJ at the end of June 1997. Because DEA funding is 100 percent drug related, ONDCP has contact with all of DEA’s administrative and program divisions, not just the budget office. Therefore, typically there are no budgeting surprises during development of DEA’s budget. For the fiscal year 1999 drug budget, DEA was required to submit a summer agency-level budget to ONDCP for review. In mid-June 1997, DEA made its initial budget submission to DOJ. DEA subsequently submitted the same budget—albeit in a different format—to ONDCP in July 1997. According to DEA officials, the departmental budget submission takes priority over the submission to ONDCP. Although ONDCP requires the same information, it must be presented in a different format. DEA’s summer budget as submitted to ONDCP totaled about $1.4 billion. According to ONDCP officials, ONDCP was satisfied with the DEA submission.
However, on August 8, 1997, ONDCP issued a precertification letter to DOJ, which listed four specific program initiatives that ONDCP believed should receive additional funding in DEA’s budget:

- Andean Coca Reduction – To reduce South American coca leaf production through enforcement and interdiction measures that disrupt the cocaine export industry and through development programs that provide legal income alternatives and encourage the cultivation of legal crops.
- Port and Border Security – To provide improved security and enhanced drug interdiction along all U.S. air, land, and sea frontiers and at all ports of entry.
- Mexican Initiative – To reduce the flow of illicit drugs from Mexico into the United States and dismantle the organizations trafficking in drugs and money laundering.
- Caribbean Violent Crime and Regional Interdiction – To expand counterdrug operations targeting drug trafficking-related criminal activities and violence in the Caribbean region.

No specific funding level increases were recommended for any of the four initiatives. According to ONDCP documents, although the initiatives were identified for inclusion in the fiscal year 1999 budget, ONDCP gave DOJ maximum flexibility in determining both the precise scope and funding for each proposal. In mid-August 1997, the ONDCP Director met with the Attorney General to discuss the Justice drug budget and possible increases for DEA prior to submission of DOJ’s final fall budget to OMB. The precertification letter discussed above served as an informal agenda for this meeting. As submitted to OMB and ONDCP in mid-October 1997, DEA’s fall budget submission was about $82 million lower overall than its initial summer submission. However, the total requested amounts remained higher than the previous year’s enacted amounts. DEA’s total fall budget request was about $1.3 billion. ONDCP certified the DOJ budget as adequate in late November 1997.
OMB’s preliminary agency funding decisions (commonly known as “passback”) were made on November 25, 1997. ONDCP formally appealed to OMB on behalf of DEA for $30.1 million in additional funding for the DEA program initiatives previously identified in its summer precertification letter to DOJ. After the appeals process was completed, the final OMB budget passback amount approved for DEA was $68 million above OMB’s preliminary decision, including $10 million for the Caribbean initiative. DEA’s total budget request, as approved by the President and submitted to Congress, was about $1.25 billion, which represented an increase of about $55 million from the fiscal year 1998 enacted budget. The mission of the U.S. Customs Service is to ensure that all goods and persons entering and leaving the country do so in accordance with applicable laws and regulations. As part of its mission, Customs guards against smuggling and is responsible for interdicting and seizing contraband, including narcotics and illegal drugs. In carrying out this mission, Customs’ fiscal year 1999 drug budget request primarily supported goal number 4 of the National Drug Control Strategy—shield America’s air, land, and sea frontiers. In addition to inspectors and agents at over 300 ports of entry, Customs maintains aircraft, vessels, and surveillance devices to help detect and interdict illegal drugs at or before they reach our borders. In fiscal year 1998, Customs received about $606 million in funding for its drug control programs. Customs’ budget office began the fiscal year 1999 drug budget process in April 1997, when it received ONDCP’s initial drug budget guidance. For the fiscal year 1999 drug budget, Customs was required to submit a summer agency-level drug budget to ONDCP for review. 
Customs first submitted its budget to Treasury, after which the Customs budget office began extracting those drug budget items from the overall budget in order to prepare its drug budget—a process taking about 2 to 3 weeks. However, because Customs was late in submitting its budget to Treasury—it was due June 1, 1997, but was not delivered until July 1997—Customs also was late in preparing and submitting its drug budget to ONDCP. In an August 8, 1997, precertification letter to Treasury, ONDCP identified three specific drug program initiatives that it believed should receive additional funding in Customs’ budget: Port and Border Security, Caribbean Violent Crime and Regional Interdiction, and Mexican Initiative (as described previously). ONDCP did not request any specific funding levels in this letter. Customs’ drug budget was submitted to ONDCP on August 11, 1997. The total drug-related funding requested in this summer submission was $844 million, which represented about 40 percent of Customs’ total budget. During this time, the ONDCP Director met directly with the Treasury Secretary to discuss drug budget funding priorities and ONDCP’s specific budget recommendations. Customs’ fall drug budget was submitted to ONDCP on October 31, 1997. This budget requested $773 million for drug control, which was about $71 million lower overall than its initial summer submission. This decrease was primarily due to reductions in funding for drug control interdiction activities (goal number 4 of the National Strategy)—Customs’ primary drug control activity. However, the total requested amounts for drug control remained higher than the previous year’s enacted amounts. Customs officials told us the decrease from the summer submission was due to reductions resulting from Treasury’s departmental budget review. ONDCP certified Treasury’s budget as adequate in late November 1997. OMB’s preliminary agency funding decisions were made on November 25, 1997. 
ONDCP formally appealed to OMB on behalf of Customs for $160.4 million in additional funding for the Customs program initiatives previously identified in its summer precertification letter to Treasury. Despite this appeal, OMB’s final passback reduced Customs’ drug budget further, although it provided Customs an additional $29 million for the Port and Border Security Initiative. Because OMB had approved $100 million in discretionary drug control funding (to be allocated according to ONDCP’s recommendations), ONDCP decided to allocate an additional $25 million of this funding to Customs for border security activities—nonintrusive detection equipment. According to Customs officials, this amount was in addition to $14 million that Customs had already included in the budget for three other initiatives. Customs’ total drug budget request, as approved by the President and submitted to Congress, ultimately totaled about $673 million, representing an increase of about $66 million over fiscal year 1998 enacted levels. SAMHSA is one of the key federal agencies that support goals number 1 and 3 of the National Drug Control Strategy—primarily involving prevention and treatment of illegal drug use. SAMHSA’s mission within the nation’s health system is to improve the quality and availability of prevention, treatment, and rehabilitation services in order to reduce illness, death, disability, and cost to society resulting from substance abuse and mental illnesses. In fiscal year 1998, SAMHSA received about $1.3 billion in funding for its drug control programs. For the fiscal year 1999 drug budget, SAMHSA was required to submit a summer agency-level drug budget to ONDCP for review. In late June 1997, SAMHSA submitted its drug budget totaling about $1.65 billion to ONDCP. After reviewing the SAMHSA budget, ONDCP issued a precertification letter to the Department of Health and Human Services (HHS).
This letter, dated August 8, 1997, identified the following program initiatives where ONDCP wanted to see additional resources applied (although no specific funding levels were identified):

- Youth Drug Prevention Research – To conduct a program of research designed to improve the understanding of youth drug abuse and addiction and disseminate findings from various research sources.
- Youth Substance Abuse Prevention – To use findings from successful programs to develop new state incentive grant drug prevention programs, focusing on drug prevention in early childhood and among adolescents.
- Close the Public System Treatment Gap – To increase drug treatment capacity and outreach for chronic users and addicts, including their families.
- Medications for Drug Dependence – To expand grant funding to support priority research projects associated with the development of medications and treatment protocols to prevent or reduce drug dependence and abuse.

On August 15, 1997, the ONDCP Director met with the HHS Secretary to discuss the changes ONDCP wanted to see (as spelled out in its precertification letter) in the HHS fall budget submission to OMB and ONDCP. The initiative to Close the Public System Treatment Gap was a major drug policy priority for ONDCP and was a primary issue of disagreement among HHS, SAMHSA, and ONDCP during the budget review process. HHS believed SAMHSA should move away from directly funding service projects and instead focus on a research-based approach. SAMHSA supported expansion of targeted treatment capacity, rather than generally expanding its block grant program. ONDCP strongly supported expanding treatment capacity by increasing SAMHSA’s block grant program. In September 1997, HHS submitted its fall budget to OMB and ONDCP. In this submission, SAMHSA’s drug budget totaled about $1.4 billion; however, it did not include the changes that ONDCP had requested.
After reviewing the fall submission, ONDCP drafted a proposed letter (dated October 9, 1997) to decertify SAMHSA’s drug budget. In this letter, ONDCP specifically recommended that HHS’ budget submission include at least $400 million in additional funding for the Close the Treatment Gap initiative and an additional $50 million for the Youth Substance Abuse Prevention initiative. During October 1997, ONDCP, SAMHSA, and HHS budget officials met twice to try to resolve the treatment gap issue. Although all three supported the idea of closing the gap in drug treatment, they disagreed on the size of the treatment gap, a factor that could significantly affect both the estimated cost of closing it and the most effective way to fund the initiative. According to ONDCP, SAMHSA did not quantify the treatment gap in its initial drug budget submission, and ONDCP believed the gap was larger than could be addressed by the amount that SAMHSA had requested. Based on discussions at these meetings, HHS agreed to amend its fall budget request to address ONDCP’s concerns. In November 1997, HHS submitted an amended budget to OMB. According to the HHS Secretary’s letter to the OMB Director, an additional $75 million was included in the budget to fund treatment efforts in cities with serious drug problems, while another $35 million would provide funds to enhance existing state data efforts and to improve treatment and/or its delivery to vulnerable populations. An additional $115 million ($82 million drug related) was also included in the amended budget to increase the Substance Abuse Block Grant, for a total drug budget increase of about $192 million. According to SAMHSA officials, making a resubmission to OMB was an unusual occurrence and represented a significant accommodation by HHS to the wishes of ONDCP. In late November 1997, approximately 1 week after HHS amended its OMB budget submission, the department’s drug budget was certified by ONDCP.
OMB’s preliminary agency funding decisions were made on November 25, 1997. Subsequently, ONDCP formally appealed to OMB for an additional $200 million to fund the treatment gap initiative, consistent with ONDCP’s previous recommendation to HHS. OMB’s final budget passback included the additional funding to help expand drug treatment efforts that had been included in SAMHSA’s amended fall budget submission. However, at the same time, OMB eliminated all increases for SAMHSA mental health programs and cut substance abuse prevention and treatment funding by $75.6 million below 1998 levels. SAMHSA’s total drug budget request, as approved by the President and submitted to Congress, totaled about $1.36 billion, an increase of about $40 million over fiscal year 1998’s enacted budget. DOD’s fiscal year 1999 counterdrug budget request included funding for all 5 goals of the National Drug Control Strategy, although it primarily supported goals number 4 and 5—interdiction and supply reduction of illegal drugs. DOD’s role, among other things, is to detect and monitor aerial and maritime transit of illegal drugs headed to the United States. DOD also supports foreign intelligence collection and analysis programs that help source and transit countries arrest drug kingpins and dismantle their organizations. In fiscal year 1998, DOD’s enacted budget included about $848 million in funding for counterdrug programs. For the fiscal year 1999 budget cycle, ONDCP required DOD to submit its counterdrug budget to ONDCP in the fall, at the same time the department’s overall budget was submitted to OMB. To provide DOD additional specific program guidance, on August 8, 1997, the ONDCP Director wrote to the Secretary of Defense requesting that the DOD drug control budget, in order to be certified as adequate, include funding for two specific program initiatives: Andean Coca Reduction and Mexican Initiative.
In an August 26, 1997, memorandum from DOD to ONDCP, the Principal Director for Drug Enforcement Policy and Support said that because the fiscal year 1998 appropriation and authorization acts had not yet passed, the requested increases could not be incorporated into the DOD counterdrug budget submission at that time. According to DOD officials, DOD did not have congressional authority to carry out all the suggested programs that ONDCP had highlighted in its August 8, 1997, letter. To give ONDCP advance notice of what DOD’s counterdrug budget submission would look like, the memorandum included an attachment briefly summarizing DOD’s preliminary fiscal year 1999 counterdrug budget, which at that time totaled $809 million. On September 16, 1997, DOD submitted its preliminary counterdrug budget to ONDCP as required. In the budget transmittal letter, the Principal Director for Drug Enforcement Policy and Support indicated that additional counterdrug funding might be provided in DOD’s overall budget after it was finalized with OMB. In a September 24, 1997, letter to the Secretary, the ONDCP Director replied that DOD’s counterdrug program appeared to be systematically underfunded. The Director asked that DOD give careful consideration to adding $141 million in fiscal year 1999 enhancements—for a total counterdrug budget of $950 million—to support four counterdrug program initiatives: (1) Andean Coca Reduction ($75 million), (2) Mexican Initiative ($24 million), (3) Caribbean Violent Crime and Regional Interdiction ($12 million), and (4) National Guard Counterdrug Operations ($30 million). During this time, the ONDCP Director, Secretary of Defense, and Deputy Secretary of Defense met to further discuss ONDCP’s recommended increases. On November 6, 1997, the ONDCP Director sent a letter to the Secretary of Defense notifying him that ONDCP had determined that DOD’s preliminary fiscal year 1999 counterdrug budget could not be certified.
The letter again indicated that an additional $141 million was needed to correct deficiencies in the current budget. Also on November 6, 1997, the Secretary of Defense replied that the amounts requested by ONDCP were excessive. For example, in response to ONDCP’s recommendation for additional spending on the Mexican Initiative, the Secretary stated that DOD already planned to spend $12 million in Mexico in fiscal year 1999 and that it would be a logistical challenge to increase this amount. Additionally, the Secretary stated, DOD could not increase spending on the Andean Initiative until enactment of additional DOD authority by Congress. On November 7, 1997, the ONDCP Director reiterated to the Secretary of Defense that ONDCP would not certify the DOD counterdrug budget and sent similar letters to key administration and congressional officials. At the time of OMB’s preliminary budget passback decisions, DOD’s overall budget submission had not yet been finalized. Based on DOD’s preliminary counterdrug budget submission, ONDCP formally appealed to OMB for the $141 million in increased funding that ONDCP had previously identified when it decertified DOD’s counterdrug budget. In mid-December 1997, DOD announced that it would support additional resources in its budget for counterdrug programs. ONDCP supported DOD’s decision, but in a December 16, 1997, letter to DOD, ONDCP continued to request additional funding for the National Guard Counterdrug Program. As finally agreed to with OMB in late December 1997, DOD’s counterdrug budget totaled $883 million, including about $75 million in increased funding for three of the four program initiatives previously recommended by ONDCP. DOD’s total fiscal year 1999 counterdrug budget as approved by the President and submitted to Congress was $883 million. According to DOD officials, the DOD budget cycle and that of ONDCP do not align very well.
Normally, DOD does not finalize its overall departmental budget (internally or with OMB) until late December. Throughout the fall, and continuing into OMB’s passback season, DOD is constantly refining its budget submission. Thus, DOD’s counterdrug budget is not always available in final form when ONDCP would like it for certification purposes. According to ONDCP officials, in order to influence the DOD budget process, ONDCP usually receives DOD’s draft drug budget submission in late August or early September. DOD’s final budget submission is generally completed too late in the process for ONDCP to propose changes. Because of this practice, DOD’s drug budget is typically reviewed for certification purposes prior to other drug control agencies’ budgets in the fall. Based on the preliminary DOD budget submitted in September 1997 and further discussions with DOD officials, ONDCP believed that the department was not going to fund the initiatives in question at a level that would be adequate; and thus, it decertified DOD’s drug budget. ONDCP could not wait until December 1997—when DOD’s final budget was completed—because it had to begin preparing for discussions with OMB and the President about the overall national drug budget. Further, there was no guarantee that DOD’s final budget—when it was submitted—would be sufficient to merit certification. Although it is difficult to isolate the specific effect that ONDCP’s input had on the drug budget as enacted by Congress, we can identify changes in the drug budget that occurred during the appropriations process. We can also compare those changes with ONDCP’s efforts to monitor development of the National Drug Control Budget during the budget and appropriations process. After the President’s proposed budget was submitted to Congress in February 1998, ONDCP continued to monitor the status of the drug budget during the congressional appropriations process.
In several instances, ONDCP corresponded directly with members of the House and Senate Appropriations Subcommittees that were responsible for overseeing drug control agencies’ budgets. The ONDCP Director also testified on several occasions, in conjunction with the congressional appropriations hearings process, about the national drug control program and the need for additional funding in specific areas. The national drug control budget as enacted by Congress represented an increase of about $816 million over the President’s fiscal year 1999 drug budget request. This increase can be attributed, in large part, to the October 1998 enactment of the Western Hemisphere Drug Elimination Act and the Emergency Supplemental Appropriations for fiscal year 1999. The Western Hemisphere Drug Elimination Act authorized, among other things, increased funding for drug interdiction and supply reduction activities over the next 3 fiscal years, 1999 to 2001. The Emergency Supplemental Appropriations provided increased counterdrug funding for fiscal year 1999 in the amount of $844 million. While the Western Hemisphere Drug Elimination Act contains no explicit statement about the relevance of ONDCP’s budget certification process to its passage, one of its purposes was to state Congress’ desire that DOD give higher priority to counterdrug activities—a position also stated by ONDCP when it decertified DOD’s fiscal year 1999 drug budget. In addition, the act authorized increased drug control funding for supply reduction activities in source countries and Caribbean transit zones, two activities for which ONDCP previously recommended budget increases in its precertification letters to Justice and Treasury. Despite Congress’ intentions to increase the national drug budget, the ONDCP Director and the administration opposed passage of this act, stating that, among other things, it was an attempt by Congress to micromanage national drug strategy.
The Emergency Supplemental Appropriations for fiscal year 1999 provided significant supplemental funding for the national drug control budget, particularly in areas in which ONDCP had recommended increases during the budget certification process. For example, DEA, whose budget ONDCP had previously certified as adequate, received a small additional increase in counterdrug funding—$10 million. This was primarily for drug control activities in source countries in North and South America, budget activities that ONDCP had previously recommended be increased in its precertification letter to DOJ. Customs, whose budget ONDCP had previously certified as adequate, nevertheless received a significant additional increase—$267 million—in counterdrug funding. Most of this amount was for drug surveillance and interdiction in transit zones, as well as enhanced port and border inspection capabilities. ONDCP had recommended increases in each of these areas in its precertification letter to Treasury and also later recommended additional funding for inspection technology. DOD received an additional $42 million in counterdrug funding, primarily for port and border security and international supply reduction activities. When added to its regular counterdrug appropriation of $895 million, DOD’s total counterdrug funding for fiscal year 1999 ($937 million) was nearly equal to what ONDCP had originally recommended ($950 million). Although SAMHSA did not receive additional funding under the Emergency Supplemental Appropriations for fiscal year 1999, SAMHSA’s drug budget may have benefited from ONDCP’s input into the appropriations process. ONDCP had previously requested OMB to increase SAMHSA’s drug treatment funding by $200 million over its request— consistent with the total amount ($400 million) ONDCP had originally recommended during SAMHSA’s budget certification review. 
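The DOD figures above can be reconciled with simple arithmetic. The sketch below merely restates the amounts quoted in this paragraph (in millions of dollars) to show how close DOD's final fiscal year 1999 counterdrug funding came to ONDCP's original recommendation; it introduces no figures beyond those in the text.

```python
# DOD fiscal year 1999 counterdrug funding, in millions of dollars,
# using only the amounts quoted in the surrounding text.
regular_appropriation = 895   # regular counterdrug appropriation
emergency_supplemental = 42   # added by the Emergency Supplemental Appropriations
ondcp_recommendation = 950    # total ONDCP had originally recommended

total_funding = regular_appropriation + emergency_supplemental
shortfall = ondcp_recommendation - total_funding

print(total_funding)  # 937
print(shortfall)      # 13
```

The $13 million remainder is why the report describes the final total as "nearly equal to" the $950 million recommendation rather than matching it.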
In addition, in subsequent letters to members of the congressional appropriations committees, the ONDCP Director identified SAMHSA’s Substance Abuse Block Grant as a key initiative needing additional resources for fiscal year 1999. Congress eventually provided an additional $75 million over and above the President’s requested funding for SAMHSA’s Substance Abuse Block Grant program. Table 2 summarizes how the four agency drug budgets changed at each stage in the fiscal year 1999 budget process. ONDCP has two primary tools to help monitor the extent to which drug control agencies and programs are achieving intended results. First, as part of its authorizing legislation, ONDCP has general authority to monitor and, if necessary, direct how drug control agencies should manage their individual budgets to implement the National Drug Control Program. However, because of statutory restrictions on how ONDCP can exercise this authority and ONDCP’s desire to maintain positive interagency relationships, these authorities have little direct impact on ONDCP’s management of the National Drug Control Program. In practice, ONDCP prefers not to intervene in the daily operations of individual drug control agencies but rather to provide an overall strategic and tactical framework that lets the agencies work out the operational details. Second, to carry out its responsibility for annually assessing the effectiveness of the federal government’s National Drug Control Program, ONDCP has recently implemented the Performance Measures of Effectiveness (PME) system. Established in February 1998 through cooperative efforts between ONDCP and the drug control community, the PME system provides a framework for assessing the effectiveness of the National Drug Control Strategy by utilizing goals, objectives, and measurable effects, known as performance targets. 
This system is expected to allow ONDCP, the agencies, and Congress to better manage the programs and resources associated with the nation’s drug control efforts. The PME system is a work in progress, and questions remain that could affect the system’s ultimate success. ONDCP has authority to monitor and, if necessary, direct drug control agencies in how they manage their budgets. For example, as part of its general responsibility for overseeing and coordinating implementation of the National Drug Control Strategy, ONDCP has the authority to recommend to the President changes in the organization, management, budgets, and personnel of federal drug control agencies. ONDCP’s authority also extends to agencies’ appropriated funds, whereby ONDCP

- must approve, prior to submission to Congress, any request by a federal drug control agency to reprogram or transfer any amount of appropriated drug control funds greater than $5 million;
- may, upon advance notification to Congress and with the concurrence of the affected agencies, transfer appropriated drug control funds from one federal drug control agency to another; and
- may issue funds control notices that direct how federal drug control agencies may obligate appropriated drug control funds.

In addition, as part of the budget certification process described previously, ONDCP can direct federal drug control agencies to make changes to their annual drug budget submissions so that the budgets are consistent with the specific drug control initiatives and priorities identified in the National Strategy. ONDCP officials noted that its reprogramming authority, although not specifically aimed at managing agency performance, provides a means by which ONDCP can help ensure, in advance, that any significant budgetary changes will not negatively affect the agency’s ability to meet the goals and objectives of the National Strategy.
Normally, the officials noted, the requesting agency discusses such requests in advance with ONDCP, and approval is routine. According to ONDCP officials, on two occasions in fiscal year 1999, ONDCP exercised its authority to approve agency drug budget reprogramming requests: the State Department requested reprogramming of $19.5 million, most of which was to realign funding between existing counterdrug programs, and DOD requested reprogramming to shift an additional $45 million into its Counterdrug Transfer Account. ONDCP officials said they could not recall disapproving an agency reprogramming request. ONDCP has not used its authority to directly transfer drug control funding from one agency or department to another. According to ONDCP officials, this authority is not likely to be used in the future, even though Congress recently increased the amount eligible for transfer from 2 percent to 3 percent of the transferring agency’s drug budget. Under the current law, ONDCP must report any proposed transfer to Congress before taking any action, and it may exercise the transfer authority only with the consent of the head of each affected agency. Because of these checks and balances, this new authority in effect provides ONDCP no more power to influence agency budgets than it already had under existing authorities. ONDCP officials told us that, in any event, they prefer to effect changes in agency drug budgets through negotiation, interagency working relationships, and the normal budget process. The officials noted that a decision on the part of ONDCP to recommend the transfer of agency funding under this authority could jeopardize these interagency relationships.
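As a rough illustration of the two dollar rules described above, the sketch below expresses the $5 million approval threshold and the 3 percent transfer cap as simple functions. The function and constant names are our own assumptions, not ONDCP's or the statute's terminology, and real determinations involve more than a dollar comparison:

```python
# Illustrative sketch only; names and structure are assumptions.

APPROVAL_THRESHOLD = 5_000_000   # requests above $5 million require prior ONDCP approval
TRANSFER_CAP_PERCENT = 3         # ONDCP may transfer up to 3 percent of an agency's drug budget

def needs_ondcp_approval(request_amount: int) -> bool:
    """Return True if a drug control fund reprogramming or transfer
    request is large enough to require prior ONDCP approval."""
    return request_amount > APPROVAL_THRESHOLD

def max_transferable(agency_drug_budget: int) -> float:
    """Largest amount ONDCP could transfer out of an agency's drug budget
    under the 3 percent cap (assuming agency consent and advance notice
    to Congress, as the statute requires)."""
    return agency_drug_budget * TRANSFER_CAP_PERCENT / 100

# The State Department's $19.5 million request cited above clears the threshold.
print(needs_ondcp_approval(19_500_000))  # True
```

Both fiscal year 1999 requests noted above ($19.5 million and $45 million) would fall under the approval requirement by this test.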
According to ONDCP officials, funds control notice authority provides ONDCP the flexibility to direct drug control spending on particular projects, activities, functions, or object classes; and the ability to ensure that a project or activity critical to the National Drug Control Strategy is funded. They said this authority would be used by ONDCP on an exception basis to specify the timing or amount of spending related to certain appropriations. Funds control notices could also be issued to keep a drug control agency from spending funding on a particular project or activity. The existence of this authority tends to make ONDCP budget recommendations and requests to drug control agencies more persuasive. To date, ONDCP has not had to issue a funds control notice. As noted previously, ONDCP does use the budget certification process in an effort to effect changes in drug control agencies’ budgets. Although not always successful—in the case of DOD’s fiscal year 1999 budget, for example—the certification process provides a mechanism by which ONDCP can review agencies’ drug budgets for consistency with the National Strategy and make recommendations that bring these budgets into line with the strategy’s current goals and objectives. To better evaluate the effectiveness of federal drug control efforts, in February 1998 ONDCP established its PME system—a system of goals, objectives, and targets designed to implement the National Strategy and measure the effectiveness of the nation’s drug control efforts. The Anti-Drug Abuse Act of 1988 required ONDCP to include in each year’s National Strategy an evaluation of the effectiveness of federal drug control efforts during the previous year. The PME system was developed in response to Executive Order 12880 (issued November 16, 1993) and additional statutory language included in the Violent Crime Control and Law Enforcement Act of 1994 (P.L. 103-322), which required a more detailed assessment of federal drug control efforts.
As stated in ONDCP’s 1998 PME report, the PME system is designed to (1) assess the effectiveness of the National Drug Control Strategy, (2) provide the drug control community with critical information on what needs to be done to refine policy and programmatic direction, and (3) assist with drug program budget management at all levels. The 1998 report goes on to state, however, that the PME system was not used to construct the national drug control budget. Rather, the performance targets were developed separately from the budget process. Eventually, the PME system is meant to enable the drug control community to assess and select among various options for achieving the performance targets—including budget/resource management tools; shared responsibility by federal, state, local, and private organizations; and the system of laws and regulations. ONDCP began its PME effort in 1995, when it initiated an interagency effort to draft performance targets and measures to be included in the 1996 National Strategy. Working groups—consisting of agency staff, line managers, and others knowledgeable about drug control issues and programs—were established to develop an acceptable measurement plan and specific performance targets. By 1996, this effort evolved further as the working groups began developing the performance measurement framework that would become the PME system. The working groups reconvened in February 1997 with some new members. The final recommendations made by the working groups were incorporated by ONDCP into the PME report that was issued in February 1998. After the initial PME report was issued in 1998, the working groups reconvened to develop specific action plans identifying the responsibilities of each individual agency in working towards the PME performance targets.
According to ONDCP officials, the working groups were encouraged to develop the action plans, without regard to budgetary constraints, to identify the best approaches to operationalizing agency responsibilities. ONDCP intends to publish finalized action plans in subsequent PME reports, after they have been cleared at the department level. The working groups also focused on other refinements to the PME system, including defining causal relationships between agency activities and desired impacts for each target; identifying annual targets that correspond to achieving the 5- and 10-year outcomes; and developing plans for addressing gaps in performance measurement data. According to ONDCP officials, ONDCP intends to report on the results and implementation of the PME system each February, in conjunction with the publication of the National Drug Budget Summary. The first annual status report was issued in February 1999. Although it was too soon to assess the National Strategy’s effectiveness, the 1999 report described accomplishments during the prior year—including six milestone performance targets that were achieved. The report also described ONDCP’s plans for additional development of the PME system during 1999. For example, ONDCP plans to reach out to state and local entities that have antidrug interests and include their input in the 2000 PME report. ONDCP believes this is an important next step in implementing and evaluating the National Strategy, since the strategy is meant to be a national, not strictly federal, document. The 1998 National Drug Control Strategy identified 5 strategic goals and 32 objectives as part of a comprehensive effort to reduce drug use (demand), decrease drug availability (supply), and reduce the adverse consequences of drug use.
The strategy’s five goals are as follows:

Goal 1 – Prevent drug use among America’s youth;
Goal 2 – Increase the safety of America’s citizens;
Goal 3 – Reduce the health and social costs of drug use;
Goal 4 – Shield America’s air, land, and sea frontiers; and
Goal 5 – Break foreign and domestic sources of supply.

The goals help to define the major initiatives that must be pursued to reduce drug use, availability, and consequences. Each goal includes one or more objectives, which help to measure progress towards the goal and may be modified as counterdrug efforts succeed or new challenges emerge. For example, goal number 4 includes the following objectives:

- Reduce drug flow in transit and arrival zones,
- Improve coordination among U.S. drug control agencies,
- Improve coordination with drug source and transit nations, and
- Conduct research and develop technology to deter drug flow into the United States.

The PME system takes this approach a step further by linking the strategy’s goals and objectives to 94 specific targets, while at the same time identifying measures (i.e., data variables or events) used to track progress towards these targets. As illustrated in figure 1, the PME’s 12 “impact” targets define the desired outcomes for the National Strategy’s five goals. The other 82 “performance” targets define progress towards the National Strategy’s 32 supporting objectives. While impact targets are to be used to assess whether the National Strategy is successful overall, the performance targets are to offer additional information on what needs to be done to refine policy or programmatic directions. According to ONDCP officials, the concept or logic model underlying the PME system is that the goals, objectives, and targets cascade down to the various federal drug control agencies responsible for reporting on ONDCP’s performance and impact targets that have been established for 2002 and 2007.
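The cascade described above—goals containing objectives, objectives containing performance targets, with impact targets attached to the goals themselves—can be pictured as a nested structure. The sketch below is our own illustration of one branch of that hierarchy; the field names and abbreviated target text are assumptions, not ONDCP's actual schema:

```python
# Illustrative-only model of the PME hierarchy: 5 goals -> 32 objectives ->
# 82 performance targets, plus 12 impact targets attached to the goals
# (94 targets in total). Only one goal and one objective are filled in here.
pme = {
    "Goal 5: Break foreign and domestic sources of supply": {
        "impact_targets": [
            "Source zone outflow: -15% by 2002, -30% by 2007 (vs. 1996 base)",
        ],
        "objectives": {
            "Objective 1: Reduce production of specific illegal drugs": [
                "Illicit coca: -20% by 2002, -40% by 2007 (vs. 1996 base)",
                "Opium poppies (performance target)",
                "Marijuana (performance target)",
                "Other illegal drugs (performance target)",
            ],
        },
    },
}

def count_targets(system: dict) -> tuple:
    """Count impact and performance targets across the hierarchy."""
    impact = sum(len(goal["impact_targets"]) for goal in system.values())
    performance = sum(
        len(targets)
        for goal in system.values()
        for targets in goal["objectives"].values()
    )
    return impact, performance

print(count_targets(pme))  # (1, 4) for the single branch sketched here
```

A fully populated structure of this shape would report (12, 82), matching the counts given above.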
To measure success in meeting the National Strategy’s overall goals and objectives, ONDCP plans to compile data provided by the drug control agencies on those performance targets for which they have supporting responsibilities. Each of the goals in the strategy is associated with several impact targets, objectives, performance targets, and measures. For example, goal number 5 of the strategy is to break foreign and domestic drug sources of supply. As described in the PME system, meeting this goal depends on achieving six objectives, each of which addresses some aspect of foreign or domestic supply. Each objective further consists of two to four performance targets, the measurement of which is to determine whether the objectives have been achieved. To determine whether the overall goal has been met, two additional impact targets—one dealing with foreign drug supply and one dealing with domestic drug production—are to measure outcomes over 5- and 10-year periods. Focusing on foreign supply, the impact target measuring achievement of this aspect of goal number 5 is as follows: Source zone outflow — By 2002, reduce the rate of outflow of illicit drugs from the source zone by 15 percent, as compared with the 1996 base year. By 2007, reduce the outflow rate by a total of 30 percent measured against the base year. Goal number 5, objective 1 is to reduce production of specific illegal drugs; achieving this objective would lead to a reduction in source zone outflow. This objective contains four specific performance targets for reductions in illicit coca, opium poppies, marijuana, and other illegal drugs. An example of one of these performance targets is for illicit coca: Illicit coca – By 2002, reduce the worldwide net cultivation of coca destined for illicit cocaine production by at least 20 percent, as compared with the 1996 base year. By 2007, reduce net cultivation by at least 40 percent compared with the base year.
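The percent-reduction arithmetic behind these impact and performance targets is simple to make concrete. In the sketch below, the 1996 base-year cultivation figure is hypothetical (the report does not give one); only the 20 and 40 percent reductions come from the target itself:

```python
def reduction_thresholds(base_year_value: float, reductions: dict) -> dict:
    """For each target year, compute the largest value that would still
    satisfy a 'reduce by at least X percent vs. the base year' target."""
    return {year: base_year_value * (1 - pct / 100)
            for year, pct in reductions.items()}

# Hypothetical 1996 base: 200,000 hectares of coca under cultivation.
base_1996_hectares = 200_000

# Illicit coca performance target: at least 20% below base by 2002, 40% by 2007.
thresholds = reduction_thresholds(base_1996_hectares, {2002: 20, 2007: 40})
print(thresholds)  # {2002: 160000.0, 2007: 120000.0}
```

Measured cultivation at or below these thresholds in the respective years would meet the target; the same arithmetic applies to the source zone outflow impact target with its 15 and 30 percent reductions.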
Progress towards this target is expected to be measured based on coca cultivation as expressed in hectares under cultivation and metric ton equivalent of potential production capacity, assessed annually, on a worldwide basis. In its 1999 PME report, ONDCP identified an existing source for this measurement data—the International Narcotics Control Strategy Report (issued annually by the Department of State)—and the Central Intelligence Agency (CIA) was identified as the primary reporting agency. According to the underlying assumptions behind the PME system, if all of the performance targets associated with objective 1 are reached in 2002 and 2007, then the objective—reduced drug production—should be achieved. However, achieving this objective does not necessarily mean that the associated impact target—reduced source zone outflow—for goal number 5 will be met. Rather, that outcome is contingent not only on achieving objective 1 but also on the results achieved towards the other targets and objectives associated with the source zone outflow impact target. In the above example, although the CIA is the primary reporting agency, five other drug control departments and agencies—DEA, DOD, State, FBI, and U.S. Agency for International Development—also have responsibility for meeting the illicit coca performance target. According to ONDCP, if the performance target is not reached, further analysis will be necessary to clearly establish why and to identify the appropriate corrective actions—which might include changes in agency funding, resources, assets, or responsibilities, or in the target itself. ONDCP expects the PME system to ultimately bring accountability to the nation’s drug control efforts. As agencies collect and report performance measurement data to ONDCP, ONDCP expects the PME system to help identify which drug control programs are contributing to the achievement of desired outcomes.
ONDCP also expects that, after the PME interagency action plans are finalized, they will be fully reflected in agency budget submissions and performance plans submitted under the Government Performance and Results Act (Results Act). According to the 1999 PME report, agencies will be asked to link responsibilities within these action plans to their budget submissions, and programs will need to be linked to the targets to which they contribute. ONDCP and drug control agency officials raised several issues that they said need to be addressed in order for the PME system to be successfully implemented. First, for many of the performance targets identified, no data currently exist to measure progress towards the target. According to the February 1999 PME status report, about one-third of the performance targets were not currently measurable, with goal number 2 of the National Strategy—reduce drug-related crime and violence—having the largest proportion of unmeasurable targets (10 of 17). For example, one target for goal number 2 is to reduce the rate of violent crimes and crimes against property that are associated with illegal drugs. However, data currently collected for these types of crimes (through FBI Uniform Crime Reports) are not broken out by drug use involvement. ONDCP is taking steps to address these types of data limitations. For example, ONDCP’s Subcommittee on Data, Research, and Interagency Coordination has recently completed a federal drug-related data needs assessment and an inventory of federal drug-related data sources. ONDCP has also included $3.3 million in its fiscal year 2000 budget request to fund data development activities. ONDCP expects that all of the targets identified in the 1999 PME report will be measurable within 3 years. Second, it is not yet clear how ONDCP will use the PME information during its budget certification process. 
Agency officials we talked to raised concerns that PME targets may become a way to judge the performance of individual drug control agencies. ONDCP officials have stated, however, that the PME data will not be used in this manner, but rather these data will be used to assess the effectiveness of the National Strategy and the PME framework itself. During the fiscal year 1999 drug budget certification process, ONDCP did not use performance data as criteria, but rather it focused on overall dollar amounts and specific program initiatives that agencies were expected to address in their budgets. For fiscal year 2000, ONDCP’s budget guidance requested that drug control agencies format their budget submissions so that proposed spending was broken out by performance target. However, ONDCP officials indicated that they have not yet decided how PME information will be integrated into the certification process in future budget cycles. They believe that identifying a direct connection between the funding request and the associated PME performance target will allow ONDCP to make more informed recommendations to the agencies about where they should focus their drug control funds. ONDCP expects that a decision on this issue will likely be made by calendar year 2001, the point at which ONDCP estimates data will be available to measure progress towards all of the previously established performance targets. With respect to agencies’ own performance planning, ONDCP has stated: “Agencies are required to track their own performance through their Government Performance and Results Act plans, which should include aspects of their own specific drug control missions. The plans should be consistent with the Strategy and the PME system.” For the four selected departments and agencies we reviewed, we compared performance measures from their 1999 Results Act performance plans with performance measures developed under the PME system.
We found that the Results Act plans may be inconsistent with, or contain less specific information than, the PME system. For example: DEA’s fiscal year 1999 annual performance plan contains strategic goals and strategies for achieving those goals that parallel goals and objectives described in the PME system. For example, DEA’s plan contains a strategic goal of disrupting and dismantling drug trafficking organizations, which corresponds to two similar PME objectives. The PME performance targets for these two objectives are very specific about achieving percentage increases in numbers of organizations disrupted or dismantled over 5- and 10-year periods. DEA’s corresponding annual goals—increasing arrests, removals, and seizures; increasing foreign operations; and disrupting drug traffickers—are less specific and output-oriented, although they are expected to result in the outcomes of reduced trafficking capability, disruption and dismantling of trafficking organizations, and enhanced international coordination and intelligence collection. In addition, DEA’s plan lacks specific performance targets upon which to gauge progress towards these annual goals or outcomes. Treasury’s fiscal year 1999 annual performance plan contains the goal to reduce trafficking, smuggling, and use of illicit drugs. Customs is the primary Treasury agency responsible for achieving this goal, which roughly corresponds to PME goal number 4, objective 1: conduct flexible operations to detect, disrupt, deter, and seize illegal drugs in transit to the United States at U.S. borders. Both the annual performance plan and PME establish targets for specific drugs, although (1) the data sources identified to measure progress towards the targets are slightly different and (2) Customs’ targets focus on single-agency outputs (increased drugs seized) while ONDCP’s focus on multiagency outcomes (reduced flow of drugs).
SAMHSA’s fiscal year 1999 annual performance plan contains an initiative that directly responds to PME goal number 1, objective 2—pursue a vigorous public advertising and communication program dealing with the dangers of illegal drugs. For both, the expected outcome is to increase the percentage of youth who consider illegal drugs to be harmful. Although similar overall, differences can be seen in the performance targets identified and data sources used to measure progress towards the targets. PME has identified as a target an increase of 20 percent, by 2002, in the percentage of youth perceiving great risk in using marijuana; while SAMHSA’s 2002 target is to reduce, by 25 percent, past month usage of marijuana by youths. PME’s data source measures survey responses of 12th graders; while SAMHSA’s separate data source measures survey responses of 12- to 17-year-olds. DOD’s fiscal year 1999 performance plan does not specifically address goals for counterdrug activities, which make up only a small fraction of DOD’s overall budget. The plan did identify a quantitative performance measure for drug interdiction and counterdrug activity—tons of cocaine seized—but did not identify a baseline to measure against or a performance target to be achieved. According to DOD officials, the Office of Drug Enforcement Policy and Support has linked its counterdrug planning, programming, and budgeting system to the goals, targets, and measures in ONDCP’s PME system. The Office of Drug Enforcement Policy and Support tracks the relative performance of DOD systems employed in counterdrug efforts. The officials said these statistics are then used to evaluate overall program effectiveness and support DOD budgetary decisions. Agency officials have told us that currently more interest lies in performance measurement and reporting required by the Results Act, rather than the PME system.
As a result, they are primarily focused on responding to the concerns of their departments and congressional oversight and appropriations committees, with respect to the Results Act. However, as part of ONDCP’s 1998 reauthorization legislation, Congress strongly endorsed the national drug control performance measurement system. Further, ONDCP is specifically required to design the system so that it (1) monitors consistency between the goals and objectives of drug control agencies and (2) ensures that their goals and budgets support and are fully consistent with the National Drug Control Strategy. As stated in the 1999 PME report, ONDCP expects that, as PME working groups develop the interagency action plans, elements of the action plans will eventually be fully reflected in agency budgets and Government Performance and Results Act plans. ONDCP’s fiscal year 1999 budget certification process appears consistent with the requirements of the Anti-Drug Abuse Act of 1988. Certification allows ONDCP to influence agency drug budgets early in the budget development process and bring any drug budget shortfalls to the attention of budget decisionmakers. Because certification is only the first phase of the drug budget development process, funding issues or disagreements that cannot be resolved during the certification phase can still be addressed through ONDCP’s continuing input into the congressional appropriations process. ONDCP’s PME system appears to provide a framework for bringing greater accountability to the nation’s drug control efforts. In light of Congress’ recent interest in measuring the effectiveness of the nation’s drug control efforts, ONDCP’s approach to fully implement the PME system by (1) addressing existing limitations in performance measurement data and (2) examining ways to integrate PME performance data into the budget certification process seems appropriate. 
Because the PME system has been just recently implemented, additional assessment would be necessary to determine whether the system is fully functional, is achieving its designed purpose, and has been integrated with department and agency processes required under the Results Act. We provided a draft of this report to the Director of ONDCP; the Attorney General; and the Secretaries of the Treasury, HHS, and Defense for comment. We received oral and written comments during the period of April 14 to 27, 1999, from the Director, ONDCP; the Chief Inspector, Inspection Division, DEA; the Assistant Commissioner, Office of Investigations, Customs Service; the Inspector General, HHS; and the Deputy Assistant Secretary for Drug Enforcement Policy and Support, DOD. Their comments and our responses are summarized below. ONDCP concurred with the report and provided technical clarifications, which we have incorporated in the report where appropriate. In its comments on the budget certification process and the PME system, ONDCP noted that: The budget authorities we reviewed have been renewed as part of the Office of National Drug Control Policy Reauthorization Act of 1998. ONDCP will continue its efforts to guide the development of agency drug control programs through the annual drug budget certification process. The PME system will be refined during fiscal year 1999, as ONDCP addresses some of the important issues raised in this report. In particular, ONDCP intends to make significant progress this year to better link the PME system with the drug budget. DEA expressed concern about our suggestion that DEA’s 1999 performance plan was less specific and output-oriented, and lacked measurable targets upon which to gauge progress towards the annual goals of the Performance Measures of Effectiveness. DEA stated that its performance plan directly supports DOJ’s Strategic Plan, which itself was designed to meet the goals and objectives of the National Drug Control Strategy. 
We agree that DEA’s 1999 performance plan contains strategic goals and strategies for achieving those goals that parallel goals and objectives described in the PME system, and we have stated so in this report. However, DEA’s annual goals as stated in the plan are less specific than those in the PME system, the goals do not include specific performance targets for either outputs or outcomes, and the performance plan does not identify performance measures that will be used to track progress towards the goals. This approach is inconsistent with the Results Act, which requires performance plans to contain objective, quantifiable, and measurable performance goals, as well as performance indicators to measure outputs and outcomes. Customs, HHS, and DOD concurred with the report and also provided technical comments and clarifications, which have been incorporated in the report where appropriate. We are sending copies of this report to Representative Patsy T. Mink, Ranking Minority Member, House Subcommittee on Criminal Justice, Drug Policy, and Human Resources; and to Senator Strom Thurmond, Chairman, and Senator Charles E. Schumer, Ranking Minority Member, Senate Subcommittee on Criminal Justice Oversight. We are also sending copies of this report to Barry R. McCaffrey, Director, Office of National Drug Control Policy; The Honorable William S. Cohen, Secretary of Defense; The Honorable Donna E. Shalala, Secretary of Health and Human Services; The Honorable Janet Reno, Attorney General; The Honorable Robert E. Rubin, Secretary of the Treasury; and The Honorable Jacob Lew, Director, Office of Management and Budget. This report also will be made available to others upon request. Major contributors are listed in appendix II. If you have any questions, please contact me on (202) 512-8777. 
The former Chairman of the House Government Reform and Oversight Subcommittee on National Security, International Affairs, and Criminal Justice asked us to examine the role of the Office of National Drug Control Policy (ONDCP) in shaping the national drug control budget. In discussions with the Subcommittee staff, we specifically agreed to assess whether the process ONDCP followed to certify federal agencies’ drug control budgets for fiscal year 1999 was consistent with statutory requirements and describe the system ONDCP has developed to assess the extent to which drug control agencies and programs achieve intended results. Our work on the budget certification process focused specifically on the fiscal year 1999 drug budget cycle. For that year, we documented the certification process followed by ONDCP and verified that certification letters were issued for all drug control departments or agencies identified by ONDCP as requiring certification. Regarding agency and program results, our work focused on ONDCP’s recently established system— Performance Measures of Effectiveness—for assessing the effectiveness of the National Drug Control Strategy. In addressing the objectives, we did our work primarily at ONDCP headquarters in Washington, D.C. To obtain additional perspectives about both the drug budget certification process and ONDCP’s Performance Measures of Effectiveness, we also reviewed the following four departments and agencies in more detail: (1) the Drug Enforcement Administration (within the Department of Justice), (2) U.S. Customs Service (within the Treasury Department), (3) Substance Abuse and Mental Health Services Administration (within the Department of Health and Human Services), and (4) the Department of Defense. We selected these four departments based on the requester’s interest; we chose the specific agencies because they are key component drug control agencies within those departments. 
To describe the process by which ONDCP certifies federal agencies’ drug control budgets, we focused on the fiscal year 1999 budget cycle. We interviewed officials in ONDCP’s Programs, Budget, Research, and Evaluation Division, as well as officials from ONDCP’s operational divisions, including Demand Reduction, Supply Reduction, and State and Local Affairs. We reviewed the legislation governing the ONDCP budget certification process, including changes resulting from ONDCP’s 1998 Reauthorization Act. We further obtained and reviewed all relevant internal and interagency correspondence relating to the certification process, including ONDCP guidance and policy, agency budget submissions, and ONDCP certification letters. In addition to these reviews, we also interviewed budget and program officials at DEA, Customs, DOD, and SAMHSA. To describe the system ONDCP has developed to assess the extent to which drug control agencies and programs achieve intended results, we interviewed officials from ONDCP and the other federal drug control agencies noted above and reviewed relevant documents provided by these agencies. We reviewed ONDCP’s National Drug Control Strategy (1997, 1998, and 1999) and ONDCP reports on Performance Measures of Effectiveness (1998 and 1999). We also reviewed Government Performance and Results Act plans for the agencies noted above and compared the performance measures identified in their fiscal year 1999 performance plans with those measures included in ONDCP’s Performance Measures of Effectiveness system. Philip D. Caramia, Evaluator-in-Charge

Pursuant to a congressional request, GAO reviewed the role of the Office of National Drug Control Policy (ONDCP) in shaping the national drug control budget that the President ultimately proposes to Congress to implement the National Drug Control Strategy, focusing on: (1) whether the process ONDCP followed to certify federal agencies' drug control budgets for fiscal year (FY) 1999 was consistent with statutory requirements; and (2) the system ONDCP has developed to assess the extent to which drug control agencies and programs achieve intended results.
GAO noted that: (1) the process ONDCP used to certify FY 1999 drug budgets was generally consistent with the requirements of the Anti-Drug Abuse Act of 1988; (2) ONDCP provided budget guidance to agencies and reviewed some agencies' preliminary budgets in the summer and others in the fall; (3) based on its budget reviews, ONDCP notified agencies of recommended changes to incorporate into their final budgets that were submitted to the President for approval; (4) ONDCP reviewed budgets of 14 drug control agencies specifically for certification to determine whether they were adequate to support the goals and objectives of the National Drug Control Strategy; (5) ONDCP certified all but the Department of Defense (DOD) budget; (6) DOD was not certified because DOD and ONDCP could not agree on funding levels for certain drug program initiatives; (7) later, however, DOD's budget was significantly increased following ONDCP's appeals to the Office of Management and Budget and the President; (8) ONDCP continued to monitor development of the national drug control budget during the remaining budget and congressional appropriations process; (9) to assess the extent to which agencies and programs achieve intended results, ONDCP has initiated a system known as Performance Measures of Effectiveness--a long-term effort designed to assess the effectiveness of the nation's drug control efforts; (10) although this system represents a blueprint for the first accountability in the area of drug policy, some questions remain about: (a) the availability of adequate data to measure performance; (b) how the system is to interface with the drug budget process; and (c) how agencies will link the performance expected of them by the National Strategy with the performance goals they prepare in response to the Government Performance and Results Act; and (11) ONDCP plans to continually monitor the system's operation to ensure that it is fully functional and achieving its designed purpose.
Prior to the Bayh-Dole Act, the government generally retained ownership of federally funded inventions regardless of whether the research was performed in federal laboratories, at universities, or by individual companies, and only 5 percent of the patents on these inventions were ever used in the private sector, according to a Congressional Research Service report. The Bayh-Dole Act allows nonprofits, small businesses, and universities to retain ownership of federally funded inventions to promote the utilization of inventions created through federal research and development programs, and to provide an incentive for contractors to commercialize federally funded inventions for sale in the marketplace. The Bayh-Dole Act applies to universities, nonprofit organizations, and small businesses that receive federal research funding. Federal agencies that enter into financial assistance awards with these types of entities must adhere to the requirements of the Bayh-Dole Act and its implementing regulation developed by the Department of Commerce. In particular, the Bayh-Dole Act requires agencies, including DOE, to incorporate specific provisions into research and development financial assistance awards that contractors must comply with to help protect the government's interests in any resulting inventions. These provisions generally require contractors to (1) disclose each invention to the agency, (2) elect whether to retain ownership of the invention, and (3) apply for a patent on an invention, typically within 1 year of electing to retain ownership. DOE generally requires similar actions from large businesses in order to retain ownership of agency funded inventions, but the time frames can vary depending on the specific provisions of an individual financial assistance agreement. Compliance with these provisions informs agencies, such as DOE, of an invention's existence, and helps ensure that a contractor takes timely steps to patent the intellectual property embodied in the invention, should the contractor wish to retain ownership of it.
If a contractor does not comply with these requirements, agencies have the authority to demand ownership of a federally funded invention to provide the government the opportunity to take steps to protect its interests in it. Once contractors own a patented invention, they are generally free to use or transfer their patent rights at their discretion, in compliance with applicable laws, such as those regarding export control. The Bayh-Dole Act also identifies certain interests the government has in federally funded inventions, including their utilization and domestic manufacture. DOE's Patent Waiver Regulation does not explicitly identify these interests with regard to inventions developed by large business contractors, but the waivers themselves generally outline specific requirements related to invention utilization and domestic manufacture as shown below. Utilization. DOE can use an authority known as march-in authority to require a contractor or licensee to grant a license to any responsible entity or entities when an agency determines that certain conditions identified in the act have been met. Additionally, the government has a nonexclusive royalty-free license to practice, and have practiced on its behalf, any invention created with federal funding. Domestic manufacture. Under the Bayh-Dole Act and DOE regulations and policy, DOE has established certain requirements in financial assistance awards for contractors to manufacture federally funded inventions in the United States. These requirements vary depending on the type of contractor involved in a financial assistance award. Domestic manufacture requirements for federally funded inventions include: U.S. Preference. U.S. Preference provisions generally require small business and nonprofit contractors' exclusive licensees to substantially manufacture federally funded inventions domestically in order to use or sell the inventions in the United States.
These requirements apply only to a contractor's exclusive licensee; the contractor that developed the invention faces no limitations on manufacturing location. Federal agencies may use their march-in authority in instances where contractors' licensees do not comply with U.S. Preference requirements. U.S. Competitiveness. U.S. Competitiveness provisions generally require that the manufacture of inventions developed by large businesses must occur substantially in the United States unless otherwise approved by DOE. DOE may require forfeiture of invention ownership or refund of its investment when contractors do not comply with these requirements if such penalties are included in the patent rights clause of the financial assistance award. U.S. Manufacturing Plan. U.S. Manufacturing Plan provisions generally require that contractors submit plans as part of a funding application specifying how they intend to domestically manufacture any potential inventions developed in the course of the financial assistance award. Where this provision is part of a financial assistance award, DOE may take ownership of the invention, require refund of its investment, or impose additional reporting requirements in instances of noncompliance. In addition, DOE may include provisions within a financial assistance award that require reports from contractors to provide information on how federally funded inventions are utilized. Specifically, DOE programs can request that contractors provide periodic reports with information regarding patent status (e.g., filing date, application number and title, and patent number and issue date), and invention utilization (e.g., the status of development, date of first commercial sale or use, and gross royalties received). DOE may request these reports at its discretion but not more than annually.
DOE relies primarily on contractor self-reporting and financial assistance award closeout procedures to ensure that contractors disclose agency funded inventions. During fiscal years 2009 through 2013, DOE initiated a total of nearly 6,000 financial assistance awards worth at least $11 billion with contractors, according to data provided by DOE. During this period, according to DOE patent counsel, contractors reported approximately 5,800 inventions, elected to take ownership of about 2,800 inventions, and were issued more than 700 patents. The amount and number of financial assistance awards DOE established varied by contractor type during this period. In fiscal year 2013, the most recent year data were available, DOE provided a majority of funds to universities and nonprofits (nearly $1 billion across 580 agreements), followed by large businesses (approximately $304 million across 79 agreements) and small businesses (approximately $290 million across 370 agreements). Contractors disclose agency funded inventions to DOE through a variety of reporting mechanisms including e-mail, regular mail, and Interagency Edison (iEdison)—an electronic reporting system that allows federal grantees and contractors to report federally funded inventions, patents, and utilization data. According to DOE procedures, when a contractor discloses an invention developed with DOE funds, patent staff create a file for that invention. Patent counsel then monitor compliance with time frames related to the contractor's determination of whether to retain ownership and pursue patent protection. From this point forward, according to DOE procedures, whenever a contractor submits additional information about an invention, DOE patent counsel review the invention's file to ensure compliance with all invention disclosure provisions.
If DOE patent counsel determine that a contractor has not met the specified time frames for disclosing and electing ownership of an invention, they send the contractor a letter—known as a demand letter—demanding that the contractor give DOE ownership of the invention. DOE patent counsel told us that they identify some noncompliance with invention disclosure requirements during closeout. They added that nondisclosure, particularly from contractors with more limited resources, is generally inadvertent. They told us that nondisclosure is more often due to limited contractor familiarity with the legal requirements or experience in working with the federal government than an intentional attempt to avoid disclosing an agency funded invention. DOE patent counsel said if a contractor failed to disclose an agency funded invention but did so inadvertently, they would generally work with the contractor to meet disclosure requirements rather than demand ownership for noncompliance with disclosure requirements. They said that approach better meets their view of the intent of the Bayh-Dole Act to commercialize federally funded inventions, while ensuring the government protects its interests in them. The closeout of a financial assistance award is also the point when DOE monitors whether contractors have patented agency funded inventions. For example, according to DOE procedures, a contractor must send DOE patent counsel a patent certification form—a document that discloses all inventions that resulted from the financial assistance award. In turn, according to DOE procedures, patent counsel use this information to ensure contractor compliance with the terms of the financial assistance award before issuing a patent clearance letter—a document in which DOE acknowledges the contractor's ownership of the patented invention.
In addition to the required certification form, upon request from DOE, contractors must provide information about patents for any invention they elected to own. When a contractor submits such information, DOE patent counsel verify that the reported patents include a statement of government interest—standard language in a patent that acknowledges the government’s role in funding its development—and that the invention’s file is complete and up-to-date in the relevant data system. DOE faces challenges in ensuring that contractors disclose inventions they develop with agency funding and is taking actions to address them. Specifically, one challenge DOE faces is not having a documented process for ensuring that contractors disclose agency funded inventions after financial assistance awards end, and the agency has initiated two pilot projects to identify the extent of potentially undisclosed inventions. Additionally, DOE faces a challenge in managing invention disclosure information because its data systems for doing so have limited capabilities. While the agency has begun to upgrade those systems, DOE has not developed an implementation plan with specific milestones for certain key steps to guide these efforts. One challenge that DOE faces is not having a documented process for ensuring that contractors disclose agency funded inventions after financial assistance awards end. DOE’s closeout procedures are designed to, among other things, ensure that contractors disclose all inventions developed during a DOE financial assistance award and that DOE’s interests in them are documented. However, DOE has no documented process to monitor inventions after award closeout. Due to the time that may elapse between when a contractor files a patent application and when a patent is granted by the U.S. Patent and Trademark Office, DOE funded inventions may exist that are not identifiable through public searches at the time a financial assistance award ends. 
Under such a scenario, if the contractor did not voluntarily disclose an invention, the invention would not be identifiable during DOE's closeout procedures. Instead, DOE patent counsel told us they rely on contractors to voluntarily disclose such information following financial assistance award closeout. As a result, the potential exists for DOE to be unaware of inventions that it funded and in which it would retain interests. To address this challenge, DOE recently launched two pilot projects aimed at better understanding the extent of undisclosed inventions. One pilot project is an audit of a sample of previously completed financial assistance awards to determine the extent to which contractors did not disclose DOE funded inventions. The second pilot project involves cross-referencing U.S. Patent and Trademark Office data against DOE information on inventions it funded. In the first pilot project, which began in December 2013, DOE developed draft audit procedures to sample previously completed financial assistance awards and determine the extent to which contractors did not disclose agency funded inventions. Under the draft audit procedures, DOE patent counsel (1) audit contractor activities for any indication of patents, patent applications, or inventions that might have been developed with DOE funding but were not disclosed to DOE; (2) submit demand letters to contractors to obtain ownership of any potentially undisclosed inventions; and (3) work with contractors, depending on the circumstances of nondisclosure, to allow them to retain ownership of the inventions. The draft audit procedures set a target of annually sampling 100 randomly selected financial assistance awards completed during the previous 5 years, which represents approximately 5 percent of the agreements closed out during that period, according to DOE documentation.
As of December 2014, DOE patent counsel told us that they had reviewed 99 financial assistance awards completed during the previous 5 years and identified three undisclosed inventions for which they sent demand letters to the relevant contractors. DOE patent counsel told us that they granted one of the contractors an extension of time to retain ownership of the undisclosed invention because DOE determined that the nondisclosure was an oversight. DOE patent counsel said that they resolved another demand letter after determining that the apparent nondisclosure was a data entry error. As of December 2014, DOE had not received a response to the third demand letter. DOE modified its draft audit procedures based on initial testing, and patent counsel told us that they might conduct another phase of pilot testing before assessing the overall results of the audit procedures and making a determination about whether to implement the pilot project on a permanent basis. They told us that this decision will depend on the extent to which the pilot project identifies undisclosed inventions compared with the resources—principally staff hours—necessary to conduct it. For example, if the audit procedures identify few undisclosed inventions, DOE could be less likely to implement the project permanently. DOE's second pilot project involves cross-referencing U.S. Patent and Trademark Office data against DOE information on inventions it funded. For this pilot project, DOE patent counsel analyzed data from a U.S. Patent and Trademark Office database to identify inventions or patents with a government interest clause indicating the patent stemmed from DOE funding. Then, DOE patent counsel reviewed the data to determine if information on issued patents or published patent applications also existed in DOE's PATMIS data system, with patent files not identified in DOE's data system representing possible unreported inventions or patents.
For the pilot, DOE patent counsel told us that they reviewed 549 patented inventions that they identified as absent from DOE’s data system. Through this effort, DOE patent counsel told us they identified 100 patented inventions that may have been undisclosed and issued 40 demand letters covering 64 undisclosed inventions to contractors. As of December 2014, they said DOE resolved 16 demand letters—covering 22 inventions—and is awaiting response from the other 24—covering the remaining 42 inventions. According to DOE patent counsel, 11 of the contractors had not properly disclosed the patented inventions, generally due to misunderstandings regarding what their responsibilities were to the government. They told us that, in each instance, DOE determined that the contractor had acted in good faith and allowed an extension of time for the contractors to file the required disclosure paperwork to retain ownership. According to DOE patent counsel, the 5 other contractors to which DOE issued demand letters demonstrated that they had not failed to comply with invention disclosure requirements. DOE patent counsel explained that the related inventions were either disclosed to DOE but not logged into its data systems, or were logged incorrectly into its data systems, among other reasons. DOE patent counsel said that this effort helped DOE identify several inventions that were not properly disclosed. DOE patent counsel also indicated that the pilot revealed that the contractors’ failure to report inventions to DOE appeared to have been in good faith, but that the contractors were unaware of all invention reporting obligations. DOE patent counsel told us that, in turn, the agency is reviewing the compliance assistance resources available to contractors and periodically reminds them of their obligations to DOE. They also said that, after these compliance assistance actions have been implemented, DOE anticipates reevaluating the usefulness of the pilot audit project. 
DOE faces a challenge in managing information on disclosed inventions, which underpins its ability to protect its interests in agency funded inventions. This is because, according to DOE patent counsel, the department's two older invention data management systems—IP Master and PATMIS—are outdated, unable to communicate with each other, and do not have functionality for electronically updating invention disclosure or patent status, hampering DOE's ability to manage information and data related to inventions developed with DOE funding. For example, DOE patent counsel told us that IP Master will soon be obsolete because, among other reasons, the vendor is discontinuing support for it. Additionally, DOE patent counsel told us that the existence of two separate systems can lead to duplicative entries and make it difficult to track invention disclosures on a department-wide basis. Further, DOE patent counsel explained that the systems do not have the functionality for electronic reporting. That means contractors submit information—such as invention disclosure reports—in mail, faxes, and e-mails, and DOE patent counsel must then manually enter that information in DOE's systems. The DOE patent counsel described this as a time-consuming, labor-intensive process that can increase the likelihood of data errors. DOE patent counsel stated that they recognized the need to transition to a unified data tracking and monitoring system with functionality for electronic reporting and have begun efforts to do so. Specifically, DOE patent counsel told us that they recently upgraded one of the agency's data management systems—IP Master—to a new system called IP Manager, which became operational in November 2014. DOE patent counsel said they intend to migrate the agency's other system—PATMIS—to the new IP Manager system in the near future so there will be only one data tracking and monitoring system.
However, prior to any such migration of PATMIS, DOE officials told us that they want to ensure that there are no technical problems with the new IP Manager system and evaluate its performance in case modifications are necessary. DOE patent counsel told us that integrating DOE’s existing systems into the IP Manager system will provide a consolidated, DOE-wide invention management database that will significantly reduce administrative costs associated with invention management and improve data quality by reducing duplicative entry of inventions. Also, according to documentation DOE provided about the IP Manager system, some of its features may help address DOE’s current data management challenges including automated generation of legal forms to reduce the need to draft unique documentation for every invention. Additionally, DOE plans to develop a “one click invention reporting” capability that would be designed to, among other things, give contractors the ability to electronically report and update invention records directly from their own databases and provide DOE with enhanced reporting functions to monitor contractor actions, including invention disclosure. DOE patent counsel explained that this would enhance DOE’s ability to share invention information across the agency to better enable it to manage information to track contractor compliance with financial assistance award provisions. DOE patent counsel told us the agency has created a workgroup composed of representatives from federal agencies and DOE laboratories to determine the requirements for this new “one click invention reporting” capability and tentatively plans to begin software development in the spring of 2015, with an initial pilot capability scheduled for fall of 2015 and full deployment in early 2016, subject to available funding. 
While DOE transitioned its IP Master system to IP Manager, and has developed an implementation plan with milestones for the additional capability it wants to add to the IP Manager system, DOE patent counsel said the agency does not have an implementation plan with milestones for when PATMIS would be migrated to IP Manager. They said that PATMIS contains the majority of the agency’s invention records and estimated that, given the amount of data contained in the system, it could cost as much as $100,000 to move the data from PATMIS into IP Manager. According to DOE patent counsel, as of November 2014, DOE had not prioritized funding to migrate the data and consolidate the databases. DOE also has not established milestones for its efforts to (1) identify requirements for the “one click invention reporting” capability it plans to add to IP Manager or (2) evaluate any system modifications necessary as a result of migrating IP Master to IP Manager and associated system requirements. Under federal standards for internal control, information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. In addition, best practices for information technology system acquisition emphasize developing requirements, among other practices, to guide software engineering investments. By planning to transition to a unified data tracking and monitoring system with functionality for electronic reporting, DOE is taking a step in the right direction. Also, the implementation plan with milestones DOE developed for the additional capability it wants to add to the IP Manager system will help it track and communicate progress toward completing this effort. 
By developing a comprehensive implementation plan with milestones that cover all aspects of the steps DOE is taking to transition to a unified data tracking and monitoring system with functionality for electronic reporting, DOE would have greater assurance that it will be able to track progress toward completing all of these steps in a timely manner. DOE faces challenges in monitoring and influencing contractor utilization and domestic manufacture of agency funded inventions to protect its interests in them, and the agency has proposed regulatory changes to address these challenges. Specifically, DOE does not have a standard for invention utilization and manufacture reporting, has a limited ability to compel domestic manufacture of agency funded inventions, and has a limited ability to influence changes in control of contractors receiving DOE funds. In turn, DOE's proposed regulatory changes address, with respect to for-profit contractors, the frequency and duration of invention utilization and manufacturing reporting, use of U.S. Manufacturing Plans, and DOE influence over changes in contractor control. DOE patent counsel indicated that the agency anticipates issuing a final rule to implement these changes in fiscal year 2015. DOE faces a challenge in understanding how inventions it funds are utilized and manufactured because it does not have a standard for collecting such information from contractors. Currently, DOE can, at its discretion, request such information, but DOE regulations do not specify the duration of this reporting. DOE patent counsel told us that there are no established procedures regarding the frequency and duration of requests for information on invention utilization and manufacturing of agency funded inventions, which could hamper DOE's efforts to protect its interests. To address this challenge, DOE's proposed regulatory change would require annual contractor reporting on the utilization and manufacture of any products that use a DOE funded invention.
This reporting would include information on the status of technology development, date of first commercial sale or use, gross royalties received, and manufacturing locations of those products, and it would be required for at least 10 years following invention disclosure. DOE patent counsel said that requiring this reporting for all DOE financial assistance awards to for-profit contractors would enhance the agency's ability to protect its interests by providing it additional information with which to monitor compliance with financial assistance award provisions. DOE faces a challenge in protecting the agency's interest in the domestic manufacture of agency funded inventions. DOE patent counsel stated that the agency's ability to compel domestic manufacture of agency funded inventions is limited because the agency can only require substantial domestic manufacture from certain contractors under certain circumstances. In particular, they said that the process for determining an "exceptional circumstance"—which DOE can use to broaden the domestic manufacture requirements beyond those included in award provisions—can be difficult. Specifically, they noted that it is not a routine process and often requires substantial internal agency analysis, as well as coordination with other agencies, including the Department of Commerce. Additionally, any broadened requirements apply only to the specific programs or activities covered by the determination. For example, in 2011, DOE determined that an exceptional circumstance existed regarding DOE's SunShot Initiative. Specifically, DOE proposed requiring a certain level of domestic manufacture for all inventions developed under the SunShot Initiative by all contractors, as well as their licensees. However, following a review to identify international trade and other issues, this proposed requirement was not implemented, according to DOE patent counsel. DOE patent counsel also said that the limitations of U.S.
Preference provisions—which apply to small businesses, academia, and nonprofits—constitute a challenge to protecting the agency's interest in domestic manufacture. Specifically, such provisions require the exclusive licensees of contractors, but not the contractors themselves, to substantially manufacture agency funded inventions domestically. To address this limitation, DOE's proposed regulatory change would allow programs to require that potential for-profit contractors submit a U.S. Manufacturing Plan as part of any funding proposal. In turn, DOE would be able to use these plans as criteria in assessing funding proposals and making financial assistance award decisions. Under the proposed regulatory change, and consistent with applicable law, DOE could make such plans binding for any licensee or entity that subsequently acquired ownership of an invention or technology developed under a DOE financial assistance award. DOE patent counsel explained that requiring and enforcing U.S. Manufacturing Plans increases the agency's ability to influence the domestic manufacture of inventions it funded. DOE faces a challenge in influencing the change in control—such as ownership—of contractors engaged in DOE financial assistance awards. A change in control may affect a contractor's ability to carry out the project that DOE is funding and, according to DOE patent counsel, the agency currently does not require contractors to notify it of such changes. DOE's proposed regulation would require contractors to notify the agency of changes in their control in certain circumstances. The proposed regulation would establish procedures for the change of control, including ownership, of a contractor that received a DOE funding award of more than $10 million. The affected contractor would be required to notify the agency within 30 days of a change of control, or within 30 days of having reason to know that such a change is likely.
Failure to notify DOE would be grounds for suspension or termination of the financial assistance award. Further, without DOE authorization of the change of control, the award funding could cease; however, such penalties would apply only to the current award. DOE is taking actions to update its data systems and move to a consolidated, DOE-wide invention management database to improve its ability to monitor and manage information regarding contractor disclosure of agency funded inventions. Currently, its two data systems are outdated, unable to communicate with each other, and do not have functionality for electronically updating invention disclosure or patent status, hampering DOE's ability to manage information and data on agency funded inventions. DOE recently upgraded one of its data systems—IP Master—to a new system called IP Manager, which became operational in November 2014, and has developed an implementation plan with milestones for the additional capability it wants to add to the IP Manager system. However, DOE has not developed an implementation plan with appropriate milestones to help assess its progress toward completing key steps, such as moving the data in its primary PATMIS database—which contains the majority of the agency's invention records—into the new IP Manager system, or for defining the requirements for planned or potential upgrades to that system. Without such a plan, DOE may not have assurance that it is making timely progress toward obtaining the information management capabilities it needs to protect its interests in agency funded inventions by monitoring information regarding contractor disclosure of these inventions.
To help provide greater assurance that DOE is making timely progress toward obtaining the information management capabilities it needs to protect its interests by monitoring contractor disclosure of agency funded inventions, we recommend that the Secretary of Energy develop an implementation plan, including appropriate milestones, to guide DOE’s efforts to improve its data management capabilities. We provided a draft of this report to the Department of Energy and Department of Commerce for review and comment. In its written comments, the Department of Energy agreed with our findings and recommendation. The Department indicated that it would develop an implementation plan to guide data management improvements. The Department of Commerce neither agreed nor disagreed with our findings but noted that it provided background information on intellectual property issues related to the review. The Department of Energy’s and Department of Commerce’s written comments are reproduced in appendixes I and II, respectively. Both agencies also provided technical comments that we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, the Secretary of Commerce, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In addition to the individual named above, Christopher Murray, Assistant Director; Richard P. Johnson; Gerald B. Leverich; Matthew D. 
Tabbert; and Michelle R. Wong made significant contributions to this report. Cheryl Arvidson and Kiki Theodoropoulos provided technical assistance.

DOE provides funding to contractors for research and development of new technologies. To incentivize participation in federal research projects and promote the use of federally funded inventions, the 1980 Bayh-Dole Act and other laws and regulations allow contractors receiving federal research and development funds to retain ownership of inventions they create so long as they adhere to certain requirements, including disclosing inventions developed with agency funding. DOE's ability to protect its interests in these inventions—including their utilization and domestic manufacture—depends on its knowledge of their existence. GAO was asked to review DOE efforts to protect its interests in agency funded inventions. This report examines: (1) DOE funding for contractor research for fiscal years 2009 through 2013 and how DOE ensures that contractors disclose agency funded inventions, (2) the challenges DOE faces in ensuring invention disclosure and actions it is taking to address them, and (3) the challenges DOE faces in protecting its interests in these inventions and the actions it is taking to address them. GAO reviewed laws, regulations, and other documents and interviewed DOE patent counsel responsible for intellectual property issues, representatives of organizations that facilitate the development of federally funded technology, and others. The U.S. Department of Energy (DOE) provided a total of at least $11 billion ($12 billion in fiscal year 2014 dollars) in research and development funding to contractors for fiscal years 2009 through 2013. Contractors reported about 5,800 inventions and 700 patents developed with DOE funding during this time period. To ensure disclosure of these agency funded inventions, DOE relies primarily on contractor self-reporting and financial assistance award closeout procedures.
Contractors are generally required to adhere to specific time frames for invention disclosure. Following contractor invention disclosure, DOE patent counsel monitor the invention through the end of a financial assistance award to ensure contractor compliance with time frame requirements for electing to retain ownership and applying for patent protection of the invention. DOE faces challenges in (1) ensuring that contractors disclose agency funded inventions and (2) managing information related to these disclosures and is taking steps to address them. Limited ability to ensure invention disclosure after funding ends: DOE does not have a documented process to ensure contractors disclose inventions after financial assistance awards end. To address this, DOE recently began two pilot efforts to determine the extent of undisclosed inventions. One is an audit of a sample of previously completed financial assistance awards and the other involves cross-referencing U.S. Patent and Trademark Office data against DOE information on inventions it funded. DOE is still implementing these efforts but reported identifying more than 100 potential undisclosed inventions. DOE will assess the results of the pilots to determine whether to continue them, according to DOE patent counsel. Data management limitations: DOE faces a challenge in managing information related to agency funded inventions because it relies on two different data systems that are outdated, unable to communicate with each other, and do not allow for electronic reporting. Under federal internal control standards, information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their responsibilities. 
DOE is in the process of updating its data systems and is planning the development of an electronic reporting function but has not established an implementation plan with milestones against which it can track its progress toward completing these efforts. By developing such a plan, DOE would have greater assurance that it is making timely progress. In addition, DOE faces challenges in its ability to monitor and influence the utilization and domestic manufacture of inventions it funded to protect its interests in them. DOE has proposed regulatory changes to address these challenges that would (1) require contractors to report on the utilization and domestic manufacture of agency funded inventions, (2) allow DOE to assess manufacturing plans as criteria for funding decisions, and (3) require contractors to obtain DOE authorization for changes in their control—including ownership—under certain circumstances. According to patent counsel, DOE expects to finalize these regulatory changes in fiscal year 2015. GAO recommends that DOE develop an implementation plan with milestones for improving its data management systems. DOE agreed with this recommendation.
PBGC plays a critical role in protecting the pension benefits of private sector workers—it is responsible for administering current or future pension benefit payments to just over 1.3 million plan participants. Its budget operations flow through two accounts, one that appears in the federal budget and one that does not. PBGC’s budget can be confusing, especially in the short term, as apparent federal budget gains may be offset by long-term liabilities that are not reported in on-budget accounts. PBGC plays a critical role in protecting the pension benefits of private sector workers. PBGC administers current or future pension benefit payments to a growing number of plan participants, from just under one-half million in fiscal year 2000 to 1.3 million in fiscal year 2007. Figure 1 shows the breakdown of recipients of benefit payments from PBGC’s single-employer insurance program. PBGC benefits are insured up to certain limits—up to $51,750 per year (about $4,312 per month) for participants aged 65, with lower benefits for younger participants. While the actual annual benefits paid to participants are not adjusted for inflation, the initial maximum levels are set by law and are indexed for inflation. Covered benefits include pension benefits accrued at normal retirement age, most early retirement benefits, and survivor and disability benefits. PBGC pays these benefits when a plan is terminated and the plan has insufficient assets to pay all benefits accrued under the plan up to the date of plan termination. In 2006, PBGC paid over 622,000 people a median benefit of about $296 per month. The vast majority of the participants in PBGC-trusteed plans receive all the benefits they were promised by their plan.
Benefits for some participants may be reduced if 1) their benefits exceed PBGC’s maximum guarantee limit, 2) a benefit increase occurred (or became payable due to a plant shutdown) within five years of the plan’s termination, or 3) a part of their benefit is a supplemental benefit. In addition to paying guaranteed benefits, PBGC may pay certain non-guaranteed benefits in limited circumstances involving asset recoveries from employers. PBGC receives no funds from general tax revenues. Operations are financed by insurance premiums set by Congress and paid by sponsors of defined benefit plans, investment income of assets from pension plans trusteed by PBGC ($4.8 billion in investment income from $68.4 billion in assets for its combined programs in 2007), and recoveries from the companies formerly responsible for the plans. Under current law, other than statutory authority to borrow up to $100 million from the Treasury Department (sometimes referred to as a $100 million line of credit), no substantial source of funds is available to PBGC if it runs out of money. In the event that PBGC were to exhaust all of its holdings, benefit payments would have to be drastically cut unless Congress were to take positive action to provide support. In 2007, PBGC received over $1.5 billion in premium income. An insured plan in the single-employer program was required to pay PBGC a yearly premium of $31 per participant for pension benefit insurance coverage in 2007. This per-participant premium rate is adjusted annually for wage inflation. Plans that are underfunded as specified by ERISA must pay PBGC an additional premium of $9 per $1,000 of underfunding. Some terminating plans also have to pay an “exit” premium of $1,250 per participant per year for three years if they undergo a distress or PBGC-initiated plan termination on or after January 1, 2006. PBGC also insures multiemployer defined benefit pensions through its multiemployer program.
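The single-employer premium arithmetic described above can be sketched as a short calculation. This is a minimal illustration only: the plan figures below are hypothetical, and the actual measurement of a plan's underfunding under ERISA is more involved than this sketch suggests.

```python
# Illustrative sketch of the 2007 single-employer premium structure
# described in the text. The plan's figures are hypothetical; real
# premium determinations under ERISA use more detailed funding measures.

FLAT_RATE = 31                # dollars per participant (2007 rate)
VARIABLE_RATE = 9             # dollars per $1,000 of underfunding
TERMINATION_PREMIUM = 1_250   # dollars per participant per year, for 3 years

def annual_premium(participants: int, underfunding: float) -> float:
    """Yearly flat-rate plus variable-rate premium for an insured plan."""
    flat = participants * FLAT_RATE
    variable = (underfunding / 1_000) * VARIABLE_RATE
    return flat + variable

# Hypothetical plan: 10,000 participants, $50 million underfunded.
premium = annual_premium(10_000, 50_000_000)
print(f"annual premium: ${premium:,.0f}")  # $310,000 flat + $450,000 variable

# A distress or PBGC-initiated termination on or after January 1, 2006,
# would add a temporary "exit" premium for each of three years:
exit_per_year = 10_000 * TERMINATION_PREMIUM
print(f"exit premium per year: ${exit_per_year:,.0f}")
```

Because the flat rate is indexed to wage inflation, the $31 figure applies only to 2007; later years would substitute the indexed rate.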
Multiemployer plans are established through collective bargaining agreements involving two or more unrelated employers and are common in industries such as construction, trucking, mining, the hotel trades, and segments of the grocery business. The multiemployer program is far smaller than the PBGC single-employer program, insuring about 10 million participants in about 1,530 plans. Like the single-employer program, PBGC collects premiums from sponsoring employers and insures multiemployer participant benefits up to a limit, although both the level of premiums and the maximum insured limit are far lower than those for the single-employer program. Further, unlike the single-employer program, a multiemployer plan termination does not trigger the PBGC benefit guarantee. A terminated multiemployer plan continues to pay full plan benefits so long as it has sufficient assets to do so. The treatment of PBGC in the federal budget is complicated by the use of two accounts—an on-budget revolving fund and a non-budgetary trust fund. Some activities flow through the federal budget and other activities are outside the federal budget. Not only is PBGC’s budget structure complex, it can also result in confusing signals about the financial health of PBGC and create unintended policy incentives. PBGC’s receipts and disbursements are required by law to be included in the federal budget. These cash flows are reported in the budget in a single revolving fund account. The cash flows include premiums paid, interest income on federal securities, benefit payments, administrative expenses, and reimbursements from PBGC’s non-budgetary trust fund. The non-budgetary trust fund includes assets obtained from terminated plans and is managed by private money managers. Because the trust fund is a non-budgetary account, the transfer of assets from terminated plans to PBGC is not considered a receipt to the government.
Likewise, the liabilities PBGC incurs when it takes over an underfunded plan, and other changes in PBGC’s assets and liabilities, are not reflected in the budget. Figure 2 provides an overview of PBGC’s budgetary and non-budgetary cash flows. When an insured pension plan is terminated, assets are transferred to PBGC’s non-budgetary trust fund. Neither these assets nor the benefit liabilities appear on the federal balance sheet, and PBGC’s net loss is not recorded in the federal government’s income statement. Assets in the non-budgetary trust fund are commingled and no longer identified with particular plans. PBGC has broad authority to oversee and administer pension assets held in its trust fund and is free to invest and expend the funds as if it were a private fiduciary of the trust fund’s holdings. PBGC can invest the assets in whatever way it chooses, as long as it acts in the best financial interest of beneficiaries. In addition to the non-budgetary trust fund, PBGC has an on-budget revolving fund. Premium income and transfers from the trust fund for both benefit payments and administrative expenses are deposited to the revolving fund as offsetting collections (that is, offsets to outlays). Unlike the trust fund, the revolving fund appears on the federal government’s balance sheet and provides PBGC with permanent spending authority to carry out its activities. In years that premium income and trust fund reimbursements exceed benefit payment and administrative costs, the revolving fund would show negative outlays, thus improving the overall fiscal balance of the federal government. Any funds that are not used to pay benefits or expenses are considered unobligated balances and are available for expenditure in the next year. By law, unobligated funds in the revolving fund must be held in Treasury securities and earn interest income.
PBGC transfers funds from the non-budgetary trust fund to its on-budget revolving fund to pay a portion of retirement annuities and certain administrative costs. Such transfers are referred to as reimbursements and are recorded as offsetting collections in the budget. Generally, the proportion of benefit payments that is reimbursed from the trust fund depends on the aggregate funding level of the plans that PBGC has taken over and is adjusted periodically. In other words, if the average funding ratio of all plans taken over by PBGC is 50 percent, then half of all benefit payments originate from the non-budgetary trust fund. In addition to financing benefits, trust fund assets are also transferred to the revolving fund to pay for PBGC’s administrative expenses related to terminations. PBGC’s other administrative expenses are paid directly from the revolving fund. Insurance programs with long-term commitments, such as PBGC, may distort the budget’s fiscal balance by looking like revenue generators in years that premium collections exceed benefit payments and administrative expenses because the programs’ long-term expected costs are not reported. For example, in 2004 when PBGC’s losses measured on an accrual basis ballooned and its deficit grew from $11 billion to $23 billion, PBGC’s cash flow reported in the budget was positive and reduced the federal government’s budget deficit. GAO has reported previously that the cash-based federal budget, which focuses on annual cash flows, does not adequately reflect the cost of pension and other insurance programs. Generally, cost is only recognized in the budget when claims are paid rather than when the commitment is made. Benefit payments of terminated plans assumed by PBGC may not be made for years, even decades, because plan participants generally are not eligible to receive pension benefits until they reach age 65. Once eligible, beneficiaries then receive benefit payments for the rest of their lives. 
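The reimbursement mechanics described above amount to a simple proportional split. The sketch below uses hypothetical figures; in practice the aggregate funding ratio is adjusted periodically across all trusteed plans rather than computed per payment.

```python
# Minimal sketch of how a benefit outlay is split between PBGC's
# non-budgetary trust fund and its on-budget revolving fund, based on
# the aggregate funding ratio described in the text. Figures are
# hypothetical illustrations, not actual PBGC data.

def split_benefit_payment(total_benefits: float, avg_funding_ratio: float):
    """Return (amount reimbursed from the trust fund,
               amount paid from the revolving fund)."""
    from_trust_fund = total_benefits * avg_funding_ratio
    from_revolving_fund = total_benefits - from_trust_fund
    return from_trust_fund, from_revolving_fund

# If plans taken over by PBGC averaged 50 percent funded, half of a
# $100 million benefit outlay would originate in the trust fund.
trust, revolving = split_benefit_payment(100_000_000, 0.50)
print(f"from trust fund: ${trust:,.0f}; from revolving fund: ${revolving:,.0f}")
```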
As a result, there can be years in which PBGC’s current cash collections exceed current cash payments, regardless of the expected long-term cost to the government. PBGC’s single-employer program faces financial challenges both from past claims resulting from bankruptcies and plan terminations, which have been concentrated in a few industrial sectors, and from structural problems such as weak plan funding rules and a premium structure that does not fully reflect the various risks posed by plans. Because of these financial challenges, GAO designated the single-employer program as “high risk” in 2003, and it remains so today. PBGC has seen recent improvements to its net financial position, and recent legislative changes have raised premiums, changed certain plan funding rules, and limited PBGC guarantees. However, the legislation has only recently been implemented, and it did not completely address a number of the risks that PBGC faces going forward. Further, PBGC has recently implemented a new investment policy that adds significant variability and risk to the assets it manages. PBGC’s net deficit for the single-employer program, which is currently $13.1 billion, reached a peak of $23.3 billion (or $23.5 billion for both insurance programs combined) in 2004, largely as a result of a number of realized and probable claims that occurred during that year. See figure 3 for the difference between PBGC assets and liabilities for both insurance programs from 1991 to 2007. GAO has generally focused its work on the single-employer pension insurance program with respect to PBGC’s overall financial challenges. This is because the single-employer program represents nearly all of the assets and liabilities held by PBGC. The assets and liabilities that PBGC accumulates from taking over, or “trusteeing,” plans have increased rapidly over the last 5 years or so. This is largely due to the termination, typically through bankruptcy, of a number of very large, underfunded plans.
In fact, eight of the top 10 largest firms that have presented claims to PBGC did so from 2002 to 2005. (See table 1.) These top 10 claims alone currently account for nearly two-thirds of all of PBGC’s claims and are concentrated among firms in the steel and airline industries. Overall, these industries account for about three-quarters of PBGC’s total claims and total single-employer benefit payments in 2006. While the claims presented by the steel and airline industries were due in some part to restructuring and competitive pressures in those industries, it is important to recognize that other economic and regulatory factors affected PBGC and DB plan sponsors as a whole. For example, when we reported on airline pension plan underfunding in late 2004, we noted that several problems contributed to the broad underfunding of DB plans. These problems included cyclical factors like the so-called “perfect storm” of key economic conditions, in which declines in stock prices lowered the value of pension assets used to pay benefits, while at the same time a decline in interest rates inflated the value of pension liabilities. The combined “bottom line” result was that many plans were underfunded at the time and had insufficient resources to pay all of their future promised benefits. Figure 4 shows the underfunding of PBGC’s single-employer plans from 1991 to 2007, including underfunding among large plans (as required to be reported under ERISA section 4010). In 2003, GAO designated PBGC’s single-employer program as high risk, that is, as a program that needs urgent congressional attention and agency action. We specifically noted PBGC’s prior-year net deficit, as well as the risk of termination among large, underfunded pension plans, as reasons for the program’s high-risk designation. As part of our monitoring of PBGC as a high-risk agency, we have highlighted additional challenges faced by the single-employer program.
Among these concerns were the serious weaknesses that existed with respect to plan funding rules and the need to re-examine PBGC’s premium structure and guarantees to better reflect the risk posed by various plans. Additionally, the number of single-employer insured DB plans has been rapidly declining, and, among the plans still in operation, many have frozen benefits for some or all participants. Moreover, the prevalence of plans that are closed to new participants suggests that PBGC is likely to see a decline in insured participants, especially as insured participants seem increasingly likely to be retired (as opposed to active or current) workers. There have been a number of developments with respect to PBGC’s situation since we issued our most recent high-risk updates in 2005 and 2007. At least until fairly recently, key economic conditions have been generally favorable for DB plan sponsors, and plan funding has generally improved. In addition, major pension legislation was enacted that addressed many of the concerns articulated in our previous reports and testimonies on PBGC’s financial condition. The Deficit Reduction Act of 2005 (DRA) was signed into law on February 8, 2006, and included provisions to raise flat-rate premiums and create a new, temporary premium for certain terminated single-employer plans. Later that year, the Pension Protection Act of 2006 (PPA) was signed into law and included a number of provisions aimed at improving plan funding and PBGC finances. However, PPA did not fully close plan funding gaps, did not adjust premiums in a way that fully reflected risk from financially distressed sponsors, and provided special relief to plan sponsors in troubled industries, particularly those in the airline industry.
PBGC’s net financial position improved from 2005 to 2006 because some very large plans that were previously classified as probable terminations were reclassified to a reasonably possible designation as a result of the relief granted to troubled industries. Since this provision has only been implemented for a few years, it is still too early to determine how much risk of new claims these reclassified plans still represent to PBGC. As many of the provisions in PPA are still phasing in, we will continue to monitor the status of the single-employer program with respect to PPA and will be updating our high-risk series in early 2009. GAO recently reported on a newly developing financial challenge facing PBGC: the change to its investment policy. While the investment policy adopted in 2008 aims to reduce PBGC’s $14 billion deficit by investing in assets with a greater expected return, GAO found that the new allocation will likely carry more risk than acknowledged by PBGC’s analysis. According to PBGC, the new allocation will be sufficiently diversified to mitigate the expected risks associated with the higher expected return. PBGC also asserted that the new allocation should involve less risk than the previous policy. However, GAO’s assessment found that, although returns are indeed likely to grow with the new allocation, the risks are likely higher as well. Although it is important that PBGC consider ways to optimize its portfolio, including higher return and diversification strategies, the agency faces unique challenges, such as its need for access to cash in the short term to pay benefits, which could further increase the risks it faces with any investment strategy that allocates significant portions of the portfolio to volatile or illiquid assets. Improvements are needed to PBGC’s governance structure, to oversight of the corporation, and to its strategic approach to program management.
PBGC’s three-member board of directors is limited in its ability to provide policy direction and oversight. According to corporate governance guidelines, a board of directors should be large enough to provide the necessary skill sets, but also small enough to promote cohesion, flexibility, and effective participation. PBGC may also be exposed to challenges as the board, its representatives, and the director will likely change with the upcoming presidential transition in January, limiting the board’s institutional knowledge of the corporation. In addition, congressional oversight of PBGC in recent years has ranged from formal congressional hearings to the use of its support agencies, such as GAO, the Congressional Budget Office, and the Congressional Research Service. However, unlike some other government corporations, PBGC does not have certain reporting requirements for providing additional information to Congress. Finally, we found that PBGC lacks a strategic approach to its acquisition and human capital management needs. PBGC’s board has limited time and resources to provide policy direction and oversight and has not established comprehensive written procedures and mechanisms to monitor PBGC’s operations. PBGC’s three-member board, established by ERISA, includes the Secretary of Labor as the Chair of the Board and the Secretaries of Commerce and Treasury. We noted that the board members have designated officials and staff within their respective agencies to conduct much of the work on their behalf and rely mostly on PBGC’s management to inform these board members’ representatives of pending issues. PBGC’s board members have numerous other responsibilities in their roles as cabinet secretaries and have been unable to dedicate consistent and comprehensive attention to PBGC. Since PBGC’s inception, the board has met infrequently. In 2003, after several high-profile pension plan terminations, PBGC’s board began meeting twice a year (see figure 5).
PBGC officials told us that it is a challenge to find a time when all three cabinet secretaries are able to meet, and in several instances the board members’ representatives officially met in their place. While the PBGC board is now meeting twice a year, very little time is spent on addressing strategic and operational issues. According to corporate governance guidelines, boards should meet regularly and focus principally on broader issues, such as corporate philosophy and mission, broad policy, strategic management, oversight and monitoring of management, and company performance against business plans. However, our review of the board’s recorded minutes found that, although some meetings devoted a portion of time to certain strategic and operational issues, such as investment policy, the financial status of PBGC’s insurance programs, and outside audit reviews, the board meetings generally lasted only about an hour. The size and composition of PBGC’s board do not meet corporate governance guidelines. According to corporate governance guidelines published by The Conference Board, corporate boards should be structured so that the composition and skill set of a board are linked to the corporation’s particular challenges and strategic vision, and should include a mix of knowledge and expertise targeted to the needs of the corporation. We did not identify any other government corporation with a board as small as PBGC’s. Government corporations’ boards averaged about 7 members, with one having as many as 15. In addition, PBGC may also be exposed to challenges as the board, board members’ representatives, and the director will likely change with the upcoming presidential transition in January 2009, limiting the board’s institutional knowledge of the corporation. The recent revision of PBGC’s investment policy provides an example of the need for a more active board. We found that the investment policies the board adopted in 2004 and 2006 were not fully implemented.
While the board assigned responsibility to PBGC for reducing equity holdings to a range of 15 to 25 percent of total investments, by 2008 the policy goal had not been met. Although the PBGC director and staff kept the board apprised of investment performance and asset allocation, we found no indication that the board had approved the deviation from its established policy or expected PBGC to continue to meet policy objectives. We previously recommended that Congress consider expanding PBGC’s board of directors to appoint additional members who possess knowledge and expertise useful to PBGC’s responsibilities and can provide needed attention. Further, dedicating staff who are independent of PBGC’s executive management and have relevant pension and financial expertise to solely support the board’s policy and oversight activities may be warranted. In response to our finding, PBGC contracted with a consulting firm to identify and review governance models and provide a background report to assist the board in its review of alternative corporate governance structures. The consulting firm’s final report describes the advantages and disadvantages of the corporate board structures and governance practices of other government corporations and select private sector companies, and concludes that there are several viable alternatives for PBGC’s governance structure and practices. Along with the board’s limited time and resources, we found that the board had not established comprehensive written procedures and mechanisms to monitor PBGC’s operations. There were no formal protocols concerning the Inspector General’s interactions with the board, and PBGC’s internal management was not required to routinely report all matters to the board. Even though PBGC used informal communication to inform the board, the board could not be certain that it received high-quality and timely information about all significant matters facing the corporation.
As a result, we recommended that PBGC’s board of directors establish formal guidelines that articulate the authorities of the board, the Department of Labor, the other board members, and their respective representatives. In May 2008, PBGC revised its bylaws. As part of its bylaw revision, the board of directors more explicitly defined the role and responsibilities of the director and the corporation’s senior officer positions, and outlined the board’s responsibilities, which include approval of policy matters significantly affecting the pension insurance program or its stakeholders; approval of the corporation’s investment policy; and review of certain management and Inspector General reports. Since 2002, PBGC officials have testified 20 times before various congressional committees—mostly on broad issues related to the status of private sector defined benefit pension policy and its effect on PBGC—and, in 2007, the Senate conducted confirmation hearings for PBGC’s director. PBGC must annually submit reports to Congress on its prior fiscal year’s financial and operational matters, which include information on PBGC’s financial statements, internal controls, and compliance with certain laws and regulations. In addition, through its support agencies—GAO, the Congressional Budget Office, and the Congressional Research Service—Congress has also provided oversight and reviewed PBGC. Specifically, Congress has asked GAO to conduct assessments of policy, management, and the financial condition of PBGC. For example, we conducted more than 10 reviews of PBGC over the past 5 years, including assessments related to PBGC’s 2005 corporate reorganization and weaknesses in its governance structure, human capital management, and contracting practices. Our work also raised concerns about PBGC’s financial condition and the state of the defined benefit industry. Some government corporations have additional reporting requirements for notifying Congress of significant actions.
For example, the Millennium Challenge Corporation is required to formally notify the appropriate congressional committees 15 days prior to the allocation or transfer of funds related to the corporation’s activities. The Commodity Credit Corporation is subject to a similar requirement. These examples demonstrate how Congress has imposed additional reporting requirements for certain activities conducted by government corporations. PBGC generally has no requirements to formally notify Congress prior to taking any significant financial or operational actions. As reported in our recent work on PBGC contracting and human capital management, contracting plays a central role in helping PBGC achieve its mission and address unpredictable workloads. Three-quarters of PBGC’s budget is spent on contracts, and nearly two-thirds of its personnel are contractors, as shown in figure 6. Since the mid-1980s, PBGC has had contracts covering a wide range of services, including the administration of terminated plans, payment of benefits, customer communication, legal assistance, document management, and information technology. From fiscal year 2000 through 2007, PBGC’s contract spending increased steadily along with its overall budget and workload, and its use of contract employees outpaced its hiring of federal employees. As PBGC’s workload grew due to the significant number of large pension plan terminations, PBGC relied on contractors to supplement its workforce, acknowledging that it has difficulty anticipating workloads due to unpredictable economic conditions. In 2000, we recommended that PBGC develop a strategic approach to contracting by conducting a comprehensive review of PBGC’s future human capital needs and using this review to better link contracting decisions to PBGC’s long-term strategic planning process.
PBGC took some initial steps to implement our recommendation, and in August 2008, we reported that PBGC had recently renewed its efforts and drafted a strategic human capital plan. In addition to drafting a strategic human capital plan, PBGC recently issued its strategic plan; however, this plan does not document how the acquisition function supports the agency’s missions and goals. Although contracting is essential to PBGC’s mission, we found that the Procurement Department is not included in corporate-level strategic planning. Further, PBGC’s draft strategic human capital plan acknowledges the need for contractor support but does not provide detailed plans for how that support will be obtained. While PBGC’s workload can expand and contract depending on the state of plan terminations, planning documents do not include strategies for managing the fluctuations. Based on these findings, we recommended that PBGC revise its strategic plan to reflect the importance of contracting and to project its vision of future contract use, and ensure that PBGC’s procurement department is included in agency-wide strategic planning. PBGC also needs a more strategic approach for improving human capital management. While PBGC has made progress in its human capital management approach by taking steps to improve its human capital planning and practices—such as drafting a succession management plan—the corporation still lacks a comprehensive human capital strategy articulated in a formal human capital plan that includes human capital policies, programs, and practices. PBGC has initiatives for the management of human capital, such as ensuring employees have the skills and competencies needed to support its mission and establishing a performance-based culture within the corporation, and has made some progress toward these goals.
However, PBGC has not routinely and systematically targeted and analyzed all necessary workforce data—such as attrition rates, occupational skill mix, and trends—to understand its current and future workforce needs. PBGC is generally able to hire staff in its key occupations—such as accountants, actuaries, and attorneys—and retain them at rates similar to those of the rest of the federal government. However, PBGC has had some difficulty hiring and retaining staff for specific occupations and positions, including executives and senior financial analysts. To support its recruitment and retention efforts, PBGC has made use of various human capital flexibilities under which the corporation has discretionary authority to provide direct compensation in certain circumstances, such as recruitment and retention incentives, superior qualification pay-setting authority, and special pay rates for specific occupations. However, PBGC officials said that they had not recently explored additional flexibilities requiring the approval of OPM and OMB to determine whether they would be applicable or appropriate for the corporation. PBGC clearly faces many challenges. The impact of the Pension Protection Act of 2006 (PPA) is still unclear, but in any case difficult decisions remain for the future. While PBGC’s net financial position has improved along with economic conditions that until recently had been favorable to plan sponsors, we are concerned that such conditions are changing and could leave PBGC exposed to another spate of claims from sponsors of very large, severely underfunded plans. The challenges PBGC faces are acutely illustrated by its recent changes to its asset investment policy. The aim of the policy change is to reduce the current deficit through greater returns, but, holding all else equal, the potential for greater returns comes with greater risk. 
This greater risk may or may not be warranted, but the uncertain results of the policy could have important implications for all PBGC stakeholders: plan sponsors, insured participants, insured beneficiaries, as well as the government and ultimately taxpayers. One thing is certain: PBGC will continue to require prudent management and diligent oversight going forward. However, PBGC faces challenges with its board structure, which will only become more apparent in the coming months as the board, its representatives, and the corporation’s director are likely to be entirely replaced by a new president. Without adequate information and preparation, this transition could not only limit the progress made by the current board, its representatives, and director, but also hinder the corporation’s ability to insure and deliver retirement benefits to the millions of Americans who rely on the corporation. As this transition highlights, an improved board structure is critical in helping PBGC manage the daunting, and in many ways fundamental, long-term financial challenges it faces, which is why we have recommended that Congress restructure the board. Chairman Lewis, Congressman Ramstad, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions you may have. For further questions about this statement, please contact Barbara D. Bovbjerg at (202) 512-7215. Individuals making key contributions to this statement include Blake Ainsworth, Charles Jeszeck, Jay McTigue, Charles Ford, Monika Gomez, Craig Winslow, and Susannah Compton.

Pension Benefit Guaranty Corporation: Need for Improved Oversight Persists. GAO-08-1062. Washington, D.C.: September 10, 2008.
Pension Benefit Guaranty Corporation: Some Steps Have Been Taken to Improve Contracting, but a More Strategic Approach Is Needed. GAO-08-871. Washington, D.C.: August 18, 2008.
PBGC Assets: Implementation of New Investment Policy Will Need Stronger Board Oversight. GAO-08-667. Washington, D.C.: July 17, 2008.
Pension Benefit Guaranty Corporation: A More Strategic Approach Could Improve Human Capital Management. GAO-08-624. Washington, D.C.: June 12, 2008.
High Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.
Pension Benefit Guaranty Corporation: Governance Structure Needs Improvements to Ensure Policy Direction and Oversight. GAO-07-808. Washington, D.C.: July 6, 2007.
PBGC’s Legal Support: Improvement Needed to Eliminate Confusion and Ensure Provision of Consistent Advice. GAO-07-757R. Washington, D.C.: May 18, 2007.
Private Pensions: Questions Concerning the Pension Benefit Guaranty Corporation’s Practices Regarding Single-Employer Probable Claims. GAO-05-991R. Washington, D.C.: September 9, 2005.
Private Pensions: The Pension Benefit Guaranty Corporation and Long-Term Budgetary Challenges. GAO-05-772T. Washington, D.C.: June 9, 2005.
Private Pensions: Recent Experiences of Large Defined Benefit Plans Illustrate Weaknesses in Funding Rules. GAO-05-294. Washington, D.C.: May 31, 2005.
Pension Benefit Guaranty Corporation: Single-Employer Pension Insurance Program Faces Significant Long-Term Risks. GAO-04-90. Washington, D.C.: October 29, 2003.
Pension Benefit Guaranty Corporation Single-Employer Insurance Program: Long-Term Vulnerabilities Warrant ‘High Risk’ Designation. GAO-03-1050SP. Washington, D.C.: July 23, 2003.
Pension Benefit Guaranty Corporation: Statutory Limitation on Administrative Expenses Does Not Provide Meaningful Control. GAO-03-301. Washington, D.C.: February 28, 2003.
GAO Forum on Governance and Accountability: Challenges to Restore Public Confidence in U.S. Corporate Governance and Accountability Systems. GAO-03-419SP. Washington, D.C.: January 2003.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Pension Benefit Guaranty Corporation (PBGC) insures the retirement future of nearly 44 million people in more than 30,000 private-sector defined benefit pension plans. In July 2003, GAO designated PBGC's single-employer pension insurance program—its largest insurance program—as "high risk," including it on GAO's list of major programs that need urgent attention and transformation. The program remains on the list today with a projected financial deficit of just over $13 billion, as of September 2007. Because Congress exercises oversight of PBGC, GAO was asked to testify today on (1) the critical role PBGC plays in protecting the pension benefits of workers and how PBGC is funded, (2) the financial challenges facing PBGC, and (3) PBGC's governance, oversight, and management challenges. To address these objectives, we are relying on our reports from the last several years that, as part of our designation of PBGC's single-employer program as high risk, explored the financial and management challenges facing the agency. GAO has made a number of recommendations and matters for congressional consideration in these past reports. PBGC generally agreed with these past recommendations and is implementing many of them. No new recommendations are being made as part of this testimony. PBGC administers the current or future pension benefits for a growing number of participants of plans that have been taken over by the agency—from 500,000 in fiscal year 2000 to 1.3 million participants in fiscal year 2007. 
PBGC is financed by insurance premiums set by Congress and paid by sponsors of defined benefit (DB) plans, investment income, assets from pension plans trusteed by PBGC, and recoveries from the companies formerly responsible for those trusteed plans; PBGC receives no funds from general revenues. The treatment of PBGC in the federal budget is complicated by the use of two accounts—an on-budget revolving fund and a non-budgetary trust fund. Ultimately, this budget treatment can be confusing—especially in the short term—as on-budget gains may be offset by long-term liabilities that are not reported to on-budget accounts. PBGC's single-employer program faces financial challenges from a history of weak plan funding rules that left it susceptible to claims from sponsors of large, severely underfunded pension plans. PBGC had seen recent improvements to its net financial position due to generally better economic conditions and statutory changes that raised premiums and took measures designed to strengthen plan funding and PBGC guarantees. However, certain improvements have only just begun phasing in, and the changes did not completely address a number of the risks that PBGC faces going forward. Further, PBGC just began implementing a new investment policy that, while offering the potential for higher returns, also adds significant variability and risk to the assets it manages. Also, changing economic conditions could further expose PBGC to future claims. Improvements are needed to PBGC's governance structure and to its strategic approach to program management. PBGC's three-member board of directors is limited in its ability to provide policy direction and oversight. PBGC may also be exposed to challenges as the board, its representatives, and the director will likely change with the upcoming presidential transition in January. In addition, PBGC lacks a strategic approach to its acquisition and human capital management needs. 
Three-quarters of PBGC's administrative budget is spent on contractors, yet PBGC's strategic planning generally does not recognize contracting as a major aspect of PBGC activities.
The Food, Agriculture, Conservation, and Trade Act of 1990 (the 1990 Farm Bill) required, among other things, that the Secretary give priority to beginning farmers in purchasing inventory farmland—properties that have come into government ownership through voluntary conveyance or foreclosure. It also expressed the sense of Congress that USDA maintain statistics on, among other things, the number of loans made, insured, or guaranteed, and inventory farmland sold or leased to beginning farmers, and that USDA establish innovative programs of finance and assistance for land transfer between generations and the establishment of new farms. Currently, FSA’s limits on direct loans for farm ownership and operations are each set at $200,000, while the guaranteed farm ownership and operating loan amounts are each set at $899,000. FSA allocates money to the states for its loan programs on the basis of the number of farmers in each state, the value of farm assets, and net farm income. For this allocation, the loan volumes of previous years may be considered as well. The Agricultural Credit Improvement Act of 1992 required the Secretary of Agriculture to reserve a portion of its direct and guaranteed farm ownership and operating loan funds for beginning farmers and ranchers. It also authorized the establishment of the Down Payment Farm Ownership Loan Program, administered by FSA. This program allows a beginning farmer to purchase a farm or ranch of up to $250,000 in value. To participate in this loan program, an applicant must make a cash down payment of at least 10 percent of the purchase price. FSA may provide up to 40 percent of the purchase or appraisal price over 15 or fewer years at a fixed interest rate of 4 percent. The balance may be obtained from another lender, with FSA providing up to a 95 percent guarantee. In addition, in accordance with the act, FSA has entered into memorandums of understanding with 21 states to provide joint financing to beginning farmers. 
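The financing split under the Down Payment Farm Ownership Loan Program described above can be illustrated with a short arithmetic sketch. This is a hypothetical example, not part of the original report: the function name is ours, and it assumes the minimum 10 percent cash down payment, the maximum 40 percent FSA share, and the maximum 95 percent guarantee on the balance.

```python
# Hypothetical sketch of the Down Payment Farm Ownership Loan Program
# financing split, using the program terms cited in the text:
# at least 10% cash down, up to 40% financed by FSA at a fixed 4% rate,
# and the balance from another lender with up to a 95% FSA guarantee.

def down_payment_loan_split(purchase_price):
    """Return (down payment, FSA portion, other-lender balance, max guarantee)."""
    down_payment = 0.10 * purchase_price   # minimum 10% cash down payment
    fsa_portion = 0.40 * purchase_price    # maximum FSA direct loan share
    other_lender = purchase_price - down_payment - fsa_portion
    max_guarantee = 0.95 * other_lender    # FSA may guarantee up to 95% of balance
    return down_payment, fsa_portion, other_lender, max_guarantee

if __name__ == "__main__":
    labels = ("Cash down payment", "FSA direct portion",
              "Other-lender balance", "Maximum FSA guarantee")
    for label, amount in zip(labels, down_payment_loan_split(250_000)):
        print(f"{label}: ${amount:,.0f}")
```

Under these assumptions, a farm at the program's $250,000 value cap would involve a $25,000 down payment, a $100,000 FSA loan, and a $125,000 balance from another lender, of which up to $118,750 could be guaranteed.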
The 1992 act also directed that the Secretary establish an Advisory Committee on Beginning Farmers and Ranchers to advise the Secretary on methods of creating new farming and ranching opportunities, among other things. The advisory committee includes representatives from the farming, ranching, and banking industries; extension education; nonprofit agencies; and federal and state staff who work directly with beginning farmers. Since it was established in 1998, the committee has met eight times, submitting recommendations to the Secretary to improve and increase opportunities for beginning farmers in starting and maintaining viable farming operations. Recently, these proposals have ranged from recommendations to develop a pilot program for providing matched savings accounts for beginning farmers to encouraging those with expiring Conservation Reserve Program easements to transfer their land to beginning farmers. Previously implemented recommendations resulted in the 2006 addition of beginning farmers to USDA’s small farms policy, which in turn led to the establishment of the Small Farms and Beginning Farmers and Ranchers Council. The Farm Security and Rural Investment Act of 2002 (the 2002 Farm Bill) authorized higher payments in two key conservation programs geared toward working lands—the Environmental Quality Incentives Program (EQIP) and the Conservation Security Program (CSP). EQIP provides farmers with financial and technical assistance to address soil, air, water, and related natural resource concerns on eligible land, while CSP supports ongoing stewardship of farmland by providing payments to producers for maintaining and enhancing conservation efforts that benefit natural resources. For both programs, the 2002 Farm Bill authorized the Secretary to provide a higher cost-share for beginning farmers—up to 90 percent of the cost of implementing a conservation practice—compared to 75 percent for other producers. For EQIP, the act also authorized higher cost-share payments for limited resource producers. 
In 2006, on average, NRCS provided a cost-share rate of almost 80 percent for beginning farmers through EQIP, compared with an average of 59 percent for non-limited-resource, established farmers. For CSP, the 2006 sign-up reduced the cost-share rate for new practice payments to not more than 65 percent for limited resource and beginning farmers and to not more than 50 percent for other producers. Furthermore, the 2002 Farm Bill authorized the Secretary to create a pilot program to provide guarantees of loans made by private sellers of a farm or ranch to beginning farmers on a contract sale basis. It also authorized the Secretary to reserve at least 15 percent of funds in its interest rate reduction program—a program to subsidize the interest rate on a guaranteed operating loan—for beginning farmers. Finally, the 2002 Farm Bill also authorized a Beginning Farmer and Rancher Development Program to provide training, education, outreach, and technical assistance initiatives, but no funding has been allocated to this program. USDA’s lending and conservation assistance to beginning farmers has been substantial and is growing. From fiscal year 2000 through 2006, FSA increased its lending to beginning farmers from $716 million to $1.1 billion annually, for a total of more than $6 billion during the period. Also, from fiscal years 2004 through 2006 (the most recent years for which data are available), NRCS’s assistance to beginning farmers through two key conservation programs nearly doubled, from over $47 million to about $92 million. In addition, beginning farmers received an increasing share of FSA’s loan dollars, from a 20 percent share in fiscal year 2000 to 35 percent by fiscal year 2006—or 27 percent of the amount FSA loaned all farmers over this period. 
At the end of fiscal year 2006, FSA had 25,064 beginning farmer borrowers in its loan portfolio. Of these borrowers, 16,828 had obtained 28,022 direct loans as of October 4, 2006, and 8,236 had obtained 11,735 guaranteed loans as of September 30, 2006. FSA also provided interest assistance on 2,409 of the guaranteed operating loans it made to beginning farmers between fiscal years 2000 and 2006. Through these loans, it obligated approximately $358 million—12 percent of the guaranteed operating loan dollars with interest assistance obligated to all farmers. Table 1 provides more detailed information about FSA’s direct and guaranteed loans to beginning farmers. Appendix II provides information on fiscal year 2006 loans to beginning farmers by state. Beginning farmers can also take advantage of FSA’s joint financing plans and Down Payment Farm Ownership Loan Program. FSA’s joint financing plans have been more popular than the down payment loan program, in part because they have longer loan terms and do not require a down payment. They allow a borrower to receive up to 50 percent of the amount financed through FSA at a reduced interest rate, with another lender providing 50 percent or more of the loan. Through joint financing arrangements, FSA made 2,395 loans to beginning farmers that provided over $287 million in direct loan assistance between fiscal years 2000 and 2006. Through the Down Payment Farm Ownership Loan Program, FSA made 777 loans to beginning farmers, providing over $42 million in direct loan assistance over the same period. In addition to providing loans, FSA sells properties to beginning farmers from its inventory of farmland properties. From fiscal years 2000 through 2006, it sold 48 properties to beginning farmers, or 4 percent of the 1,136 sold to all farmers over this time period. This form of assistance has been used infrequently in recent years because FSA’s farm inventory has been declining. 
NRCS conservation financial assistance for beginning farmers through EQIP and CSP increased from over $47 million in fiscal year 2004 to about $92 million in fiscal year 2006. In total, NRCS approved about $233 million in financial assistance for beginning farmers through these two programs from fiscal years 2004 through 2006—about 9 percent of the amount for all farmers. Table 2 shows EQIP and CSP assistance for beginning farmers over this period. Appendix III provides information on EQIP financial assistance approved for beginning farmers in fiscal year 2006 by state. Programs administered by USDA’s Risk Management Agency (RMA) and Cooperative State Research, Education, and Extension Service (CSREES) have funded organizations assisting farmers with risk management and other challenges. For example, RMA administers several partnership programs—in conjunction with state departments of agriculture, universities, nonprofit agricultural organizations, and other public or private organizations—to deliver training and information on production, marketing, and financial risk management to farmers. Some proposals funded through these programs have addressed beginning farmer needs. Additionally, the Community Outreach and Assistance Partnership Program provides higher scores to applicants that partner with organizations that can meet the needs of beginning farmers and other underserved producers. In addition, CSREES provides grants to universities, colleges, and nonprofit organizations to deliver outreach and assistance to socially disadvantaged farmers and ranchers, including farm management and marketing assistance. Table 3 describes some of the projects these two agencies have funded that have a focus on beginning farmers. 
In addition, the Agricultural Marketing Service has programs such as the Farmers Market Promotion Program to help farmers directly market their products, which may indirectly assist beginning farmers. This grant program targets funds to agricultural cooperatives; local governments; nonprofit, public health, and economic development corporations; regional farmers’ market authorities; and tribal governments to work toward expanding direct producer-to-consumer marketing opportunities. These include farmers’ markets, roadside stands, community-supported agriculture programs, and others. USDA has several efforts under way through multiple agencies that assist beginning farmers. However, it is unable to demonstrate the effectiveness of its assistance to this group because (1) it does not have a crosscutting, departmental strategic goal to guide its beginning farmer efforts and (2) it has only recently begun to develop information on the characteristics of beginning farmers, which will supplement its existing research on the age of farmers and changes in the number of farms. Although many reasons exist for helping beginning farmers, USDA has not transformed these reasons into a crosscutting, departmental strategic goal that demonstrates the outcomes it expects its beginning farmer efforts to achieve. Without such a goal, USDA runs the risk that its several efforts are not mutually reinforcing or coordinated. Such a goal could address the reasons for beginning farmer assistance cited by Congress, stakeholders, and others. For example, relevant congressional committee reports cite the importance of encouraging young people to enter farming in order to address concerns about the nation’s aging farmer population. 
Stakeholders cite additional reasons for beginning farmer assistance, such as promoting social change by increasing the number of immigrant and minority farmers and promoting changes to the structure of agriculture by increasing the number of small and middle-sized farms. In 2006, USDA incorporated beginning farmers into its small farms policy to better recognize the importance of assisting beginning farmers. The Small Farms and Beginning Farmers and Ranchers Policy is designed to provide a framework for maintaining the viability of small and beginning farmer operations. It highlights numerous priorities as shown in table 4—from supporting the special needs of beginning farmers to emphasizing socially desirable strategies for this group. It also calls for agencies and mission areas to reflect the small and beginning farmer policy in their strategic plans, performance plans, and other documents. However, the policy does not provide a management and accountability focus for USDA’s efforts. (See app. IV for a complete copy of USDA’s policy.) Furthermore, USDA strategic planning documents contain a beginning farmer performance goal specific to the FSA loan programs, but they do not integrate the several efforts of USDA and its multiple agencies to assist beginning farmers. A crosscutting, departmental strategic beginning farmer goal could provide needed direction for USDA agencies and help ensure their efforts to assist beginning farmers work toward a common purpose and serve similar clients. For example, such a crosscutting goal could help address concerns about whether FSA’s loans and NRCS’s conservation assistance are directed toward similar groups of beginning farmers. FSA’s loan programs are geared toward beginning farmers with limited economic resources—those who cannot access credit from another source. However, NRCS’s definition of a beginning farmer does not contain any income limitations. 
Not only are these programs serving different groups of farmers, but there are unintended consequences as well. According to an NRCS document, the agency’s higher cost-share rates for beginning farmers have the potential to attract wealthy, retired, and absentee landowners. For example, one NRCS official told us of a case in which a beginning farmer receiving NRCS assistance reported having an income of about $1 million, and another official said his state did not offer a higher EQIP cost-share rate for beginning farmers because of concerns that wealthy beginning farmers would benefit. Appendix V contains information about NRCS’s and FSA’s beginning farmer definitions. While USDA has not established a crosscutting, departmental strategic goal for beginning farmers, two USDA agencies—FSA and RMA—have each developed their own beginning farmer performance goals. These goals set targets for the volume of their beginning farmer activities—the number of farmers assisted and the dollars they receive—rather than outcomes. Specifically, FSA annually tracks the volume of its lending to a combined grouping of its borrowers—including beginning farmers and socially disadvantaged farmers (racial and ethnic minority farmers and women farmers). FSA measures its performance by the increase in lending to these combined groups. For example, as shown in table 5, FSA reported that in 2006, 39 percent of its loan funds were obligated to these groups. Starting in fiscal year 2006, FSA adopted a related performance goal that tracks increases in the number of beginning farmers, racial and ethnic minority farmers, and women farmers in its portfolio as a percentage of individuals in this category with at least $10,000 in sales. FSA reported having 42,495 beginning and socially disadvantaged borrowers in its portfolio in fiscal year 2006—15.5 percent of its estimate of the 273,349 beginning and socially disadvantaged farmers who have at least $10,000 in sales. 
In effect, FSA measures its volume in providing loans to these groups rather than measuring progress toward achieving a particular beginning farmer outcome, such as improving the financial well-being of beginning farmers or ensuring they continue to farm after leaving the loan program. Goals related to outcomes could provide additional insight into program effectiveness and allow FSA to evaluate the extent to which its loan programs contribute to a crosscutting, departmental goal. In addition to its specific beginning farmer goals, FSA also has broad performance goals related to its loan program. For example, one goal addresses the frequency with which farmers graduate from FSA’s direct loan program to its guaranteed loan program. Other goals address the efficiency of FSA’s lending, as shown in table 5. In addition to the information FSA tracks on the number and types of borrowers served, a 2005 University of Arkansas study provides insight into the effectiveness of USDA’s direct loan program for beginning farmers that could provide one basis for developing FSA performance goals that feed into a departmental, crosscutting goal. The study focused on borrowers, including beginning farmers, who originated FSA direct loans between fiscal years 1994 and 1996. Among other things, the study found that beginning farmer borrowers had a positive average annual change in their net worth, potentially indicating financial progress. It also found that about 82 percent of beginning farmer direct farm ownership borrowers who had left the loan program by 2004 had graduated to another form of credit, such as FSA guaranteed loans or commercial loans, or no longer needed credit. The remainder left farming voluntarily (approximately 13 percent); involuntarily, such as due to financial stress (approximately 3 percent); or died (approximately 3 percent). Like FSA, RMA has performance goals related to beginning farmers and tracks actions, as table 6 shows. 
However, tracking actions provides limited performance information and does not indicate the level of improvement. Other agencies we spoke with—NRCS, the Agricultural Marketing Service, and the Cooperative State Research, Education, and Extension Service—do not have beginning farmer performance goals. NRCS officials told us their performance goals are driven by their natural resource and environmental goals and do not directly target beginning farmers. Cooperative State Research, Education, and Extension Service and Agricultural Marketing Service officials said they have not developed beginning farmer performance goals because their programs benefit farmers broadly rather than providing targeted assistance to this group. Nevertheless, achieving a common goal of importance often requires collaborative efforts among agencies. In 2005, GAO reported that collaborative efforts require agencies to define and articulate the common purpose or outcome they are seeking to achieve. In addition, GAO reported that agencies’ collaborative efforts can be enhanced and sustained by, among other things, establishing mutually reinforcing or joint strategies; identifying and addressing needs by leveraging resources; establishing compatible policies and procedures; and developing mechanisms to monitor, evaluate, and report on results. USDA has recently begun to develop baseline information about beginning farmer characteristics, which should help the department evaluate and better target its beginning farmer efforts. Among other things, recently developed analysis of existing data shows that beginning farmers are younger than established farmers (about 7 in 10 beginning farmers are under 55 years of age), operate smaller farms, and include slightly higher proportions of ethnic minorities and women than other farmers. Table 7 provides some recent data on beginning farmers that were developed by economists from USDA’s Economic Research Service (ERS). 
These data estimate there were 484,981 beginning farmers with less than 10 years of experience operating farms from which $1,000 or more of agricultural products were produced and sold, or normally would have been sold, during the year. ERS economists told us they are supplementing this work with additional analysis to provide insight into the characteristics of beginning farmers. This information will include the location of beginning farmers across the United States, the types of production they engage in, the size of their operations, their level of participation in government programs, as well as whether they rent or own land. They are also analyzing differences between beginning farmers actively engaged in farming and those who are “hobby” farmers. For example, ERS economists found that roughly one-third of beginning farms in 2005 had no agricultural output and were likely operated by individuals interested in a rural residential lifestyle. In addition to ERS’s efforts, FSA has recently begun to analyze the financial characteristics and types of production of beginning farmers with FSA loans, as table 8 illustrates. This information shows, for example, that most beginning farmers with FSA direct loans are involved in livestock, corn, or soybean production. This type of information should help FSA determine the extent to which the characteristics of its beginning farmer borrowers reflect those of beginning farmers as a whole. Furthermore, FSA officials we spoke with said that as additional data are entered into the agency’s new centralized system for monitoring borrowers, it will be possible to conduct long-term analyses about these borrowers, including beginning farmers. This information will be valuable for understanding how farming operations change as a result of FSA assistance, including whether they expand and survive. 
Finally, USDA’s analysis of beginning farmer characteristics supplements its work relating to changes in the age of farmers and the number of farms. In 2007, USDA economists reported that the number of older farmers is increasing and the number of young farmers is declining. Younger farmers enter the business at a very slow rate, a fact that tends to increase the average age of farmers as a whole. Agricultural census data show that the average age of principal farm operators in 2002 was 55, an increase from 50 years of age in 1978. Nevertheless, the number of farms has been relatively stable in recent years, according to USDA, because of a near balance in the overall rate of farm entry and exit. Moreover, USDA maintains that changes in the age composition of the farm population and its overall size will not likely impair the nation’s food security, since increases in labor productivity have been rapid enough to maintain farm output. Over the past two decades, heightened focus on beginning farmers by Congress and the agricultural community has led to USDA programs and incentives that provide substantial financial assistance to this group. However, despite the billions of dollars provided to beginning farmers through loans and conservation assistance, USDA has not yet demonstrated the effectiveness of its assistance to beginning farmers by showing what its expenditures are accomplishing. Although there are many reasons for helping beginning farmers, USDA has not developed a crosscutting, departmental strategic goal for its beginning farmer efforts to describe its expected accomplishments. FSA provides information about the dollars it directs to beginning farmers, but this information does not provide adequate direction for the department’s efforts or speak to the outcomes of its beginning farmer assistance. 
Without a crosscutting, departmental strategic performance goal, USDA will be unable to determine the effectiveness of its current beginning farmer efforts and the need for changes in this assistance. Furthermore, the department’s recent work to develop information about the characteristics of beginning farmers should help it define the outcomes it wants to achieve and develop a related crosscutting, departmental strategic goal. Additional baseline data about beginning farmer characteristics that provide insight into who beginning farmers are, which ones USDA assists, and how beginning farmer operations in agriculture change over time should (1) help USDA track the changes within this group, (2) provide a basis for more in-depth analyses about the effects of existing programs on beginning farmers, and (3) help identify the need for new forms of assistance. Furthermore, continued analysis of how beginning farmer policies affect farm entry and the age of farmers could provide insight into program effectiveness. To better ensure USDA can provide Congress and the public with information on the effectiveness of assistance to beginning farmers, we are recommending that the Secretary of Agriculture develop a crosscutting, departmental strategic beginning farmer performance goal that identifies the desired outcomes of USDA’s beginning farmer assistance and that links to related agency goals. We also recommend that USDA track progress toward achieving these goals. We provided USDA with a draft of this report for review and comment. In a letter dated September 12, 2007, we received formal comments from the Secretary of Agriculture. These comments are reprinted in appendix VI. We also received oral technical comments, which we incorporated into the report, as appropriate. USDA stated that it generally agreed with our report and recommendations. In particular, USDA explained that it would be able to develop more focused performance measures once the 2007 Farm Bill is complete. 
However, USDA did not specifically state whether it would develop a crosscutting, departmental strategic goal as we recommended. In addition, USDA stated that its departmental and agency strategic plans, taken together, provide a comprehensive strategy to ensure that its programs to assist beginning farmers are achieving stated objectives and goals. We disagree, since the goals in USDA’s plans do not provide adequate direction and focus for the department’s multiple beginning farmer efforts. For example, the departmental goal in USDA’s Strategic Plan to “Enhance the Competitiveness and Sustainability of Rural and Farm Economies” is related to agricultural producers and rural communities broadly; it is not specific to beginning farmers. A performance goal within that plan to increase the percentage of loans made to beginning farmers, racial and ethnic minority farmers, and women farmers is not crosscutting in nature and relates only to FSA’s loan programs. Moreover, USDA’s comments do not indicate a full appreciation of the efforts needed to implement our recommendation. For example, USDA did not discuss the need for further analysis of (1) beginning farmer characteristics, (2) gaps in beginning farmer assistance, and (3) the effects of beginning farmer policies on farm entry and the age of farmers. Such analysis could help USDA define the outcomes it expects its beginning farmer assistance to achieve and develop a crosscutting, departmental strategic goal to measure success. Furthermore, USDA did not directly respond to our conclusion that it has not demonstrated what has been accomplished by the billions of dollars of assistance to beginning farmers. In light of the federal government’s large and growing structural deficits, GAO has stated that agencies must link resources and activities to results. 
While USDA has taken the first steps in tracking the numbers of farmers it assists, a crosscutting strategic goal can help ensure its programs are mutually reinforcing in their support of beginning farmers. USDA also stated that FSA has virtually no discretion in setting the definition of a qualified beginning farmer and rancher. However, we believe that if USDA determines that consistency between FSA’s and NRCS’s programmatic definitions would better ensure that beginning farmer dollars work toward a common purpose, it should consider what changes are needed and how best to effect those changes. If USDA finds the changes in definitions require legislative action to achieve consistency across programs or focus efforts on particular outcomes, it should provide its analysis to Congress for consideration. Finally, USDA provided examples of RMA partnership programs that provided higher scores to applicants partnering with organizations that help beginning farmers and other underserved producers. Although the partnership programs direct risk management assistance to a broad class of producers rather than specifically to beginning farmers, we clarified the language in our report to acknowledge how the application scoring process can benefit beginning farmers. Our report also identifies examples of projects designed to help beginning farmers and other underserved producers. As agreed with your staff, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees and the Secretary of Agriculture. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-3841 or shamesl@gao.gov. 
Contact points for our Offices of Congressional Relations and of Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. At the request of the Chairman of the Senate Committee on Agriculture, Nutrition, and Forestry, we examined U.S. Department of Agriculture (USDA) support to beginning farmers and ranchers. Our objectives were to (1) identify the key steps USDA has taken to help beginning farmers and (2) assess USDA’s actions to measure the effectiveness of these steps. To identify USDA’s key steps to assist beginning farmers, we reviewed documentation describing the purpose and extent of USDA assistance to this group. We focused on departmental efforts to assist beginning farmers, as well as efforts by individual agencies such as the Farm Service Agency (FSA) and Natural Resources Conservation Service (NRCS). We also reviewed legislation authorizing assistance to beginning farmers, such as the Food, Agriculture, Conservation, and Trade Act of 1990; the Agricultural Credit Improvement Act of 1992; and the Farm Security and Rural Investment Act of 2002. To refine our understanding of the amount of assistance provided through beginning farmer programs, we also spoke with FSA and NRCS officials who manage programs that assist beginning farmers. Specifically, we spoke with officials from FSA’s loan making and servicing divisions, as well as the agency’s Economic and Policy Analysis staff. We spoke with NRCS officials who administer the Environmental Quality Incentives Program (EQIP) and Conservation Security Program (CSP), as well as a representative from the Resource Conservation and Development and Rural Lands Division. 
In addition, to identify other programs that may assist beginning farmers either directly or indirectly, we spoke with officials representing the Cooperative State Research, Education, and Extension Service (CSREES); Risk Management Agency (RMA); and the Agricultural Marketing Service (AMS). We reviewed data these agencies provided about the level of assistance to beginning farmers, including the number of loans and conservation dollars approved. We also contacted small and beginning farmer coordinators and the Co-Executive Directors of the Small Farms and Beginning Farmers and Ranchers Council to discuss the strengths and limitations of departmental assistance to beginning farmers. To assess USDA’s actions to measure the effectiveness of steps taken to assist beginning farmers, we reviewed USDA’s and agency strategic plans and USDA’s Performance and Accountability Report. We also reviewed reports on farm entry and exit, the characteristics of beginning farmers, and the effectiveness of credit programs. In addition, we spoke with agency officials from FSA’s Farm Loan Program and an official from NRCS’s Strategic and Performance Planning Division. These officials described agency efforts taken to measure the effectiveness of USDA’s efforts to serve beginning farmers and data used to monitor program performance. Last, we spoke with Economic Research Service (ERS) and National Agricultural Statistics Service (NASS) officials about data available regarding the characteristics of beginning farmers and future directions for their research. To understand the challenges beginning farmers face, we spoke with representatives from the Advisory Committee on Beginning Farmers and Ranchers. The Advisory Committee was established by USDA in 1998 to provide advice to the Secretary of Agriculture about methods of creating new farming and ranching opportunities, among other things. 
Members interviewed included those representing academia, cooperative extension programs, state government, and advocacy groups. On the basis of discussions with members of the Advisory Committee, we identified and interviewed other stakeholders, also representing academia, cooperative extension programs, state government, and advocacy groups. These interviews provided insight into USDA assistance to beginning farmers and potential areas for change and included such groups as the American Farm Bureau Federation, California FarmLink, Cornell Cooperative Extension, Iowa State University’s Beginning Farmer Center, and the Montana Department of Agriculture, among others. We also reviewed relevant policy papers and spoke with representatives from such organizations as the Center for Rural Affairs and the Sustainable Agriculture Coalition to familiarize ourselves with their recommendations for beginning farmer policy changes. Our work was performed between September 2006 and August 2007 in accordance with generally accepted government auditing standards. FSA and NRCS have different beginning farmer definitions in place. While both definitions generally define a beginning farmer and rancher as one who has operated a farm or ranch for 10 years or less and who will materially and substantially participate in its operation, only FSA’s definition considers an applicant’s available resources as part of its program eligibility requirements. FSA’s definition also establishes other requirements that relate to its loan programs. For example, beginning farmers must agree to participate in borrower training. Table 9 presents a comparison of both FSA and NRCS beginning farmer definitions. 
Under FSA’s definition, a beginning farmer or rancher (1) has not operated a farm or ranch, or has operated a farm or ranch for not more than 10 years; (2) will materially and substantially participate in the operation of the farm or ranch; (3) meets the loan eligibility requirements of the program to which he/she is applying; (4) agrees to participate in such loan assessment, borrower training, and financial management programs as the Secretary requires; (5) demonstrates insufficient resources to continue farming or ranching on a viable scale; and (6) does not own a farm greater than 30 percent of the average size farm in the county (farm ownership loans only). Under NRCS’s definition, a beginning farmer or rancher (1) has not operated a farm or ranch, or has operated a farm or ranch for not more than 10 consecutive years; and (2) will materially and substantially participate in the operation of the farm or ranch. In addition to the individual named above, Charles Adams, Assistant Director; Kevin Bray; Barbara El Osta; Paige Gilbreath; Lynn Musser; Carol Herrnstadt Shulman; and Tracy Williams made key contributions to this report. U.S. Department of Agriculture (USDA) programs have long supported beginning farmers. USDA generally defines a beginning farmer or rancher as one who has operated a farm or ranch for 10 years or less--without regard for age--and who materially and substantially participates in its operation. USDA's Farm Service Agency (FSA) makes and guarantees loans for farmers who cannot obtain commercial credit, including beginning farmers. FSA also reserves funds for beginning farmers within its loan programs. USDA's Natural Resources Conservation Service (NRCS) provides higher conservation payments for beginning farmers through two of its conservation programs. GAO reviewed the key steps USDA has taken to help beginning farmers and assessed the department's actions to measure the effectiveness of these steps. USDA's lending and conservation assistance to beginning farmers has been substantial and is growing. USDA supports beginning farmers primarily through its lending assistance. 
From fiscal years 2000 through 2006, FSA's lending to beginning farmers rose from $716 million to $1.1 billion annually--totaling more than $6 billion. In addition, from fiscal years 2004 through 2006, the most recent years for which data are available, NRCS's annual financial assistance for beginning farmers through two key conservation programs nearly doubled from over $47 million to nearly $92 million, for a total of $233 million. However, USDA cannot demonstrate the effectiveness of its support for beginning farmers, because it has not developed a crosscutting, departmental strategic goal for its beginning farmer efforts and has only recently begun to analyze the characteristics of this group. Specifically, USDA has not developed a crosscutting, departmental strategic beginning farmer goal that demonstrates the outcomes it expects its beginning farmer efforts to achieve. Such a goal might address, for example, promoting demographic change, such as decreasing the average age of farmers, or promoting changes to the structure of agriculture, such as increasing the number of small and middle-sized farms. USDA has incorporated beginning farmers into its existing policy for maintaining the viability of small farms. Although this provides added recognition of the need to assist beginning farmers, USDA's policy does not establish a crosscutting, departmental strategic goal that provides a management and accountability focus for the department's several efforts. Furthermore, USDA tracks the numbers of farmers it assists and the dollars they receive, rather than its progress toward achieving a particular beginning farmer outcome. Having a crosscutting, departmental strategic goal could provide better insight into the desired outcomes and impact of USDA's beginning farmer efforts. USDA is just beginning to develop data about the characteristics of beginning farmers to supplement its existing analyses about the age of farmers and changes in the number of farms. 
For example, one recent analysis shows that beginning farmers are younger than established farmers, operate smaller farms, and are slightly more ethnically diverse and female than other farmers. Another indicates that roughly one-third of beginning farms in 2005 had no agricultural output and were likely operated by individuals interested in a rural residential lifestyle. Continued analysis of such characteristics and trends could provide better insight into who beginning farmers are, which ones USDA assists, and how beginning farmer operations change over time.
The primary mission of MEP is to give “hands-on” technical assistance to small- and medium-sized manufacturers trying to improve their operations through the use of appropriate technologies. MEP engage in a variety of activities to assist small- and medium-sized manufacturers, often in partnership with other business assistance providers such as Small Business Development Centers, community colleges, and federal laboratories. MEP offer a wide range of business services, including helping companies (1) solve individual manufacturing problems, (2) obtain training for their workers, (3) create marketing plans, and (4) upgrade their equipment and computers. MEP assistance focuses on small- and medium-sized manufacturers because research by the National Research Council and others has indicated that these companies lack the resources necessary to improve their manufacturing performance. MEP funding typically comes from a variety of sources, which may include federal and state government agencies, universities, private industry, and fees. Between fiscal years 1988 and 1994 Congress appropriated a total of $141.7 million (in 1994 dollars) to MEP through NIST. Fiscal year 1995 appropriations were $104 million. State or local agencies are to provide matching funds for NIST grants to individual MEP. A 1995 Battelle Memorial Institute report estimated that states collectively spent about $57.7 million specifically on MEP in fiscal year 1994. That same fiscal year, federal MEP spending was $66 million. We were not able to determine the amount of MEP funding from other sources of support, including universities, private industry, and users’ fees. This report analyzes data from questionnaires we sent to 766 manufacturers that had completed at least 40 hours of MEP assistance in 1993 in one or more of four service categories. We obtained the names of these manufacturers from the directors of 57 MEP in 34 states. 
A total of 551 manufacturers (72 percent) completed and returned the questionnaire. We also interviewed eight manufacturers who had received MEP services and were given tours of their manufacturing facilities in Maryland, Georgia, North Carolina, and South Carolina. Appendix II provides more details on our scope and methodology. In assessing the impact of MEP services on their companies’ overall business performance, 13 percent of survey respondents reported an extremely positive impact, 59 percent reported a generally positive impact, and 15 percent reported no impact. (Less than 1 percent of respondents (0.2 percent) said the assistance had had a negative impact.) We analyzed the likelihood of the companies reporting that the impact of MEP assistance on their overall business performance was extremely positive, generally positive, or not positive, depending on various company and program characteristics identified through the survey. We analyzed how the reported impact of MEP assistance related to the companies’ reported age, 1994 gross sales, and number of permanent employees. Also, we analyzed the reported impact of MEP assistance in relation to the companies’ activities associated with the assistance—whether they made financial investments, spent staff time, implemented recommendations, and paid for the assistance. In addition, we analyzed how the reported impact of MEP assistance varied according to whether programs received NIST funds. We used logistic regression techniques to determine which factors were statistically significant in predicting the reported impact of MEP assistance on companies’ overall business performance. The strength of these particular statistical techniques is that they allowed us to estimate the individual influence of each factor on the reported impact, both before and after the influences of all other relevant factors identified in the survey were controlled. 
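The logistic-regression approach described above can be sketched as follows. This is an illustrative reconstruction, not GAO's actual model or data: the factor names (made_investment, small_company), the synthetic survey responses, and the plain gradient-descent fit are hypothetical stand-ins for the survey variables and the statistical software the analysis actually used.

```python
import math
import random

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic regression (intercept plus one weight per factor)
    by plain batch gradient descent on the mean log-loss."""
    n_feat = len(X[0])
    w = [0.0] * (n_feat + 1)  # w[0] is the intercept
    for _ in range(epochs):
        grad = [0.0] * (n_feat + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi  # gradient of the log-loss w.r.t. z
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

# Hypothetical survey records: did the company make a financial
# investment, is it a small company, and did it report a positive
# impact (1) or not (0)?  The "true" effect sizes are arbitrary.
random.seed(0)
X, y = [], []
for _ in range(500):
    made_investment = random.random() < 0.5
    small_company = random.random() < 0.4
    z = -0.5 + 1.0 * made_investment + 0.7 * small_company
    y.append(1 if random.random() < 1.0 / (1.0 + math.exp(-z)) else 0)
    X.append([float(made_investment), float(small_company)])

w = fit_logistic(X, y)
# exp(coefficient) is that factor's odds ratio with the other factor
# held constant -- the usual basis for "X times more likely" figures.
odds_ratio_investment = math.exp(w[1])
odds_ratio_small = math.exp(w[2])
print(f"odds ratio, investment: {odds_ratio_investment:.2f}")
print(f"odds ratio, small firm: {odds_ratio_small:.2f}")
```

The point of the exercise is the one the report makes: each exponentiated coefficient estimates a factor's influence with the other factors held constant, rather than in isolation.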
Appendix III provides more detailed information on our methodology, the models tested, and the results obtained. We used simple frequency distributions to determine whether the companies’ expectations were met regarding the impact of MEP assistance on specific business performance indicators and to analyze whether MEP demonstrated the attributes most valued by the companies. Results from our work cannot be generalized to all companies that used MEP because our questionnaire covered only companies that had completed at least 40 hours of MEP assistance. In addition, our results do not apply to all MEP services because we limited our analysis to four MEP service categories. Since we did not evaluate the operations or management of specific federal programs, we did not obtain agency comments on this report. However, on February 12, 1996, we discussed a draft of this report with NIST officials, including the Director of the NIST Manufacturing Extension Partnership Program. He agreed with the technical accuracy of the report and offered minor clarifications, which we incorporated into the report where appropriate. We did our work primarily in Los Angeles, New York, San Francisco, and Washington, D.C., from February 1995 to January 1996 in accordance with generally accepted government auditing standards. We analyzed several factors related to company and program funding characteristics to determine whether they influenced the companies’ assessment of the impact of MEP assistance on their overall business performance. We found that several company characteristics—relating to company level of involvement with MEP assistance, and company size and age—influenced the companies’ assessment of the impact of MEP assistance on their overall business performance. However, the program funding characteristic we examined—whether the program received NIST funds—did not influence the companies’ assessment of the impact of MEP assistance. 
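A caution on reading the "times more likely" findings that follow: figures of this kind, derived from logistic models, are odds ratios rather than simple ratios of probabilities. A minimal sketch, using a purely hypothetical 60 percent baseline (not a figure from the survey), shows how an odds ratio translates into a change in probability:

```python
def apply_odds_ratio(p_baseline, odds_ratio):
    """Return the probability obtained by multiplying the odds implied
    by p_baseline by the given odds ratio."""
    odds = p_baseline / (1.0 - p_baseline) * odds_ratio
    return odds / (1.0 + odds)

# Hypothetical baseline: suppose 60 percent of a comparison group
# reported a positive impact; an odds ratio of 5.6 then implies:
p0 = 0.60
p1 = apply_odds_ratio(p0, 5.6)
print(f"{p0:.0%} baseline -> {p1:.0%} with an odds ratio of 5.6")
```

Here the probability moves from 60 percent to about 89 percent, a substantial increase but smaller than a naive "5.6 times the probability" reading would suggest.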
We found that the level of companies’ involvement played an important role in determining the outcome of MEP assistance. The manufacturers that had made financial investments in their company as a result of MEP assistance were 2.5 times more likely than those that did not to report an “extremely positive” impact on their overall business performance, as opposed to a “generally positive” impact. They also were 5.6 times as likely to report a generally positive impact as opposed to a “neutral” or “negative” impact. Companies whose staff spent relatively more time in activities related to MEP assistance were 1.7 times more likely to report an extremely positive impact of MEP assistance on their overall business performance, as opposed to a generally positive impact. Furthermore, the relatively small companies, which research has indicated are most in need of modernization assistance, were most likely to report that their overall business performance was improved by MEP assistance. According to the National Research Council, small- and medium-sized manufacturers generally lack the expertise, time, money, and information necessary to improve their manufacturing performance. We found through our survey that the companies whose fiscal year 1994 gross sales were less than $1 million were 3.1 times more likely to assess the impact of MEP assistance as extremely positive, as opposed to generally positive. Likewise, the companies started since 1985 were 2.0 times as likely as the older companies to report an extremely positive effect of MEP assistance on their overall business performance, as opposed to a generally positive effect. Our visits to manufacturers provided examples of how MEP assistance benefited growing companies. A furniture manufacturer said his company needed MEP assistance to make fewer mistakes in the growth process. 
This manufacturer said he used MEP experts to help identify and correct environmental and worker safety hazards, so the facility would comply with federal workplace standards. At a company that makes molded plastics, the company president said that the company needed MEP assistance to guide its rapid growth. MEP helped this company with strategic management, planning, worker training, and quality improvement. Our survey revealed no significant differences in how the companies viewed the impact on their overall business performance of MEP that did and did not receive NIST funds. MEP funding typically comes from a variety of sources, which may include federal and state government agencies, universities, industries, and fees. The combination of funding sources varies across programs, but our analysis revealed no significant distinction in how the companies assessed the impact of MEP that did and did not receive NIST funds. Specifically, MEP that received NIST funds were equally as likely as other MEP to have their impact on business performance rated positively by the companies. In commenting on our analysis, NIST officials said that, given the manufacturers’ positive responses to our survey, they expected no difference in how the manufacturers viewed the impact of MEP that did and did not receive NIST funds. Moreover, they said that the function of NIST funding is to help MEP serve more clients, with a wider variety of services. Also, they said that they believed NIST support improves programs’ efficiency and effectiveness, which are dimensions of MEP that our survey did not address. As part of our analysis, we compared what the companies said they expected from MEP assistance to the results they reported. 
We found that most of the companies (between 61 and 77 percent) reported that MEP assistance met or exceeded their expectations for improvements to specific business performance indicators, such as manufacturing time frames, the quality of market research, and sales to new and repeat customers. However, between 23 and 39 percent of the companies reported that their expectations were not met for improvements to these indicators. Our survey results indicate that equipment modernization and plant layout assistance improved manufacturing time frames for most of the companies expecting these improvements (see fig. 1). In particular, the survey results indicate that equipment modernization and plant layout assistance met or exceeded the expectations of a substantial number of the companies for reducing cycle times—the times required by machines or work stations to fully complete their sequence of operations (77 percent)—and setup time—the time it takes to prepare equipment for changes to production (76 percent). In addition, the assistance met a large number of the companies’ expectations for improvements to worker output (76 percent). However, about 30 percent of companies we surveyed that received equipment modernization and plant layout assistance reported that they did not have their expectations met for reductions to manufacturing lead time, the time it took them to process an order, from start to finish, after design approval. Several companies commented on how MEP assistance affected their efforts to improve plant layout and modernize equipment. One manufacturer that we visited said the company was able to solve problems with congestion and redundant product movement on the plant floor after implementing MEP plant layout recommendations. The company was rewarded with faster production and lower costs. 
Another manufacturer responding to our survey commented that, by modernizing equipment and improving plant layout, the company was better able to meet its delivery schedules and, thus, satisfy its customers’ needs. Most of the companies that received product design and development assistance reported in our survey that they achieved anticipated improvements to quality (see fig. 2). In particular, large proportions of these companies reported fewer incomplete product development projects (77 percent) and improved quality of market research (71 percent). Most of the comments we received regarding product design and development assistance were positive. For example, one respondent commented that the assistance it received made it possible for the company to develop a process that it could not have developed on its own. However, not all companies shared such views. One respondent wrote that it took too much management time to work with MEP consultants, and he felt that the company had educated the consultants, and not vice versa. Our survey also indicated that most companies’ expectations for reduced product design and development time frames were satisfied. Seventy percent of the companies reported they received anticipated reductions in the time needed to get new products to market. One survey respondent commented about the importance of MEP assistance in getting a new product to market, noting that the assistance helped the company to overcome equipment problems, which freed the company to market new machine technology. Despite positive assessments such as these, our survey results show that product design and development assistance met fewer of the companies’ expectations for reducing costs of product development (66 percent) and increasing access to new customers (65 percent), compared to other business performance indicators. 
Between 61 and 74 percent of the companies we surveyed that expected quality improvement assistance to bolster specific business performance indicators were satisfied with the results they received (see fig. 3). A substantial percentage of the companies had their expectations fulfilled regarding increased sales to repeat customers (74 percent) and new customers (69 percent). However, our results indicate that, for 39 percent of the companies, quality improvement assistance did not meet expectations for reducing rework and scrap levels. Customer satisfaction was an important goal of the companies seeking quality improvement assistance. Ninety-four percent of the companies we surveyed regarding quality improvement assistance said they sought the assistance in order to enhance their competitive position in the marketplace. In interviews, several manufacturers told us that they undertook quality improvement initiatives in order to retain and attract customers. They said that an increasing number of customers had high expectations for the quality of products. For example, at a foundry we visited, the company president said that many customers of foundry products were reducing the number of suppliers and were working on a closer, more long-term basis with the remaining suppliers. He said that this new customer-supplier relationship put more emphasis on quality than ever before and that it was extremely important to guarantee quality in order to retain customers. Another survey respondent said that the company was “forced” to comply with a quality assurance program by its customers, even though customers rejected virtually none of its products. Most of the companies that responded to our survey were satisfied with the service delivery features of the program they used. The companies ranked MEP staff expertise, timeliness, and affordability as the features most important to them. 
A majority of the companies (80 percent or more) also responded that they were satisfied that their program demonstrated each of the service delivery features they deemed important (see fig. 4). About 93 percent of the companies responding to our survey rated staff expertise as an important attribute for MEP in general, and 88 percent of respondents said they were satisfied with the expertise of the staff at the specific program they had used. In our visits to manufacturers, they cited several examples of how MEP staff expertise benefited their company. A manufacturer of heavy agricultural equipment said its three staff engineers were fully occupied solving day-to-day manufacturing problems, with no time to address the “big picture.” The company used MEP experts to support company efforts to develop innovations to keep the company moving forward. A manufacturer of souvenir and collectible items was considering investing in over $600,000 worth of advanced production equipment. The manufacturer told us that MEP located a consultant who had the expertise to provide the company with an independent opinion about whether the equipment under consideration was appropriate for the company’s needs. A hosiery mill had installed advanced knitting machines but continual machinery breakdowns had cut productivity by 70 percent. A senior company official told us that MEP brought experts in training, engineering, and human resources to help the company reverse this decline and benefit from the machinery upgrade. Most respondents looking for timely and affordable assistance said they found it through MEP. About 92 percent of the survey respondents rated timely assistance as an important MEP attribute, and 83 percent said they were satisfied with the timeliness of the assistance provided by the program they had used. 
Ninety-one percent of respondents rated reasonably priced MEP service fees and project proposals as important MEP attributes, and most were satisfied with the fees and proposal costs at their own program. Eighty percent of respondents who paid fees were satisfied that the fees were reasonable, and about 81 percent of respondents were satisfied that their program had project proposal costs within their financial means. Three hundred twenty-eight survey respondents (60 percent) paid a fee for MEP assistance. Of those, 58 percent said that the assistance was worth more than what they paid for it, 27 percent said it was worth about what they paid, and 11 percent said it was worth less than the fee they had paid.

As agreed with you, unless you announce the contents of this report earlier, we plan no further distribution until 14 days after the date of this letter. At that time, we will send copies to the Director of NIST, the Secretary of Commerce, and the Chairmen and Ranking Minority Members of congressional committees that have responsibilities related to these issues. Copies also will be made available to others upon request. The major contributors to this report are listed in appendix IV. Please contact me at (202) 512-8984 if you have any questions concerning this report.

At the request of Chairwoman Constance A. Morella of the Subcommittee on Technology, House Committee on Science, we obtained manufacturers’ views regarding the impact of manufacturing extension programs’ (MEP) services on their business performance and the factors that affected the impact of MEP services. In August 1995, we reported that most manufacturers responding to our questionnaire believed MEP assistance had positively affected their overall business performance.
Our objectives for this report were to analyze (1) the factors that may have contributed to the positive impact of MEP assistance on companies’ overall business performance; (2) whether companies’ expectations were met regarding the impact of MEP assistance on specific business performance indicators, such as manufacturing time frames and labor productivity; and (3) whether MEP actually demonstrated attributes that companies indicated they valued most, such as MEP staff expertise, timely assistance, and reasonably priced fees. We did not verify either positive or negative impacts reported by manufacturers.

To identify manufacturers that had used MEP services and survey them regarding the services’ impact on their business performance and the factors that had affected that impact, we (1) developed criteria for the type of MEP our study would include, (2) located all MEP that fit our criteria, and (3) asked these MEP for their cooperation in supplying names of clients that met our survey criteria (described in the following paragraphs).

Since the term “MEP” could include a variety of programs and organizations, we consulted MEP literature and MEP experts to develop a set of criteria to use in identifying programs to include in our study. For the purpose of our study, we considered programs to be relevant if their primary function was to provide direct technical assistance to individual manufacturers, using program staff or supervised consultants. We defined “technical assistance” as one or more of the following activities: providing access to and encouraging the use of innovative and/or off-the-shelf manufacturing technologies and processes; disseminating scientific, engineering, technical, and management information; providing access to industry-related expertise and capability in university research departments; and transferring advanced manufacturing (i.e., cutting edge) technologies and techniques to companies.
Our definition excluded business assistance programs such as the Small Business Administration’s Small Business Development Centers; business incubators; financial assistance, funding, and grant programs; joint research ventures with universities and/or federal laboratories; on-line technical data base services; and industry networks.

We located 80 MEP that met our criteria for inclusion and had been established before 1994. We used reports from the National Governors’ Association, the Northeast-Midwest Institute, and the Battelle Memorial Institute in Ohio that contained references to existing MEP as the basis for identifying programs that would possibly fit our criteria. We confirmed and updated information in these reports by conducting structured telephone interviews with all programs that we believed matched our criteria. We interviewed officials from a total of 114 programs in 40 states. Eighty of them met our criteria for inclusion and had been established before January 1994.

Fifty-seven of the 80 MEP that qualified for our study supplied us with the names of clients that met our survey criteria. Thirteen of these MEP received NIST funding for fiscal year 1994, accounting for 36 percent of survey respondents. In an effort to determine if the qualified programs that provided client information differed from the qualified programs that did not, we compared the two sets of programs on the basis of program age, total funding, federal funding, and type of administration. The results of the comparisons indicated that there were no significant differences between MEP that did and did not provide client data. We asked the 57 participating MEP to select from their records all manufacturers that met specific criteria that we developed in consultation with MEP officials and MEP evaluation experts.
The client had to meet the following criteria:

It had to be a manufacturing facility, which means that its products had to belong to one or more of the manufacturing categories in the Department of Commerce’s Standard Industrial Classification codes. Our survey excluded nonmanufacturing facilities, such as service providers or farmers.

It had to have received at least 40 hours of MEP assistance in 1993. Thus, when the facility received our survey in early 1995, at least 1 year would have elapsed since the MEP assistance ended. MEP evaluation experts have told us that 1 year would have been sufficient time for facilities to be able to gauge the value of the assistance and its impact on their business performance. Experts also have told us that 40 hours would have been enough assistance to have had a potential effect on a manufacturer’s business performance.

It had to have completed assistance in one or more of the four categories defined in the following paragraphs. In cases in which a manufacturer completed more than one type of assistance, we asked the MEP official to choose the primary assistance provided to the manufacturer (i.e., the assistance requiring the most MEP time and/or resources). We did not verify the client information MEP provided against the programs’ records.

The assistance categories we included in our survey involved the following:

Quality improvement. Technical assistance in planning, developing, and implementing a quality system to help a manufacturer attain higher quality standards.

Equipment modernization and plant layout. The evaluation and analysis of plant layout and equipment to determine the most efficient means of manufacturing or assembly through reorganization of the process flow through the facility, and/or upgrading, reconfiguring, or replacing manufacturing equipment.

Product design and development. Services to support the creation, enhancement, or marketing of a manufacturer’s product.

Environmental or energy assessment.
Assessment of hazardous materials, discharge, waste products, energy use, and other environmental effects within a manufacturing operation. We chose these four assistance categories because they share important characteristics. They are types of assistance that MEP typically offer clients, so our survey potentially could include clients from most MEP. Also, the four types of assistance are defined in a similar way by most MEP, according to MEP officials. Other MEP services (such as worker training and strategic business planning) may vary considerably from one program to another. Finally, we selected types of assistance that were directed at clients’ manufacturing operations. MEP clients receiving operations-related assistance were able to tell us (1) how they expected the assistance would affect their operations and/or performance and (2) whether or not these expectations were met. Other types of MEP assistance—examples are material engineering, electronic data exchange, and computer upgrading—have effects on manufacturers’ operations that are less visible and less easily measured. As a result, manufacturers may have difficulty determining the expected and actual impact of these types of services on their business operations and performance. We designed four questionnaires, each focusing on one assistance category. In designing our survey questions, we obtained input from National Institute of Standards and Technology (NIST) and MEP officials, MEP evaluation experts, and managers at manufacturing facilities. We also reviewed client surveys that MEP used. Each questionnaire contained identical questions to obtain background information about the respondent and to get respondents’ views on the impact of MEP services on their business performance and the factors affecting the impact of MEP services. 
However, the four surveys also had unique questions asking about the expected and actual outcomes of the assistance, because each type of assistance focuses on a different aspect of manufacturers’ operations. We tailored these questions to ask about the kind of impacts that reasonably could be expected to result from the particular kind of assistance received. As part of our survey development, we tested all four surveys with manufacturers who had received MEP assistance in Texas, Iowa, New York, and Kansas. We chose those states in order to cover diverse areas of the country where MEP are located. We also interviewed eight manufacturers who had received MEP services and were given tours of their manufacturing facilities in Maryland, Georgia, North Carolina, and South Carolina. We visited these southern states because MEP directors had agreed to arrange for us to meet selected clients. We asked the manufacturers about their experiences with MEP services and the impact of those services on their business performance. Our final surveys initially were mailed to a total of 843 manufacturers from February 1995 through March 1995. Follow-up mailings were made through May 1995. Each manufacturer was sent one survey, based on MEP information on the primary type of service the manufacturer had received. The primary reason manufacturers did not respond to our survey was their inability to recall MEP assistance they had received. We wrote letters asking the nonrespondents why they did not return our survey. We received responses from 60 companies out of 274 nonrespondents. About 33 percent told us that no one at their facility could recall the assistance received in 1993 and/or that we had addressed the survey to a person who no longer worked at the facility. On the basis of this information, in addition to other information provided by our nonrespondents, we reduced our survey population from 843 to 766. 
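The survey-population adjustment and response-rate arithmetic described here can be sketched as follows. The counts for surveys mailed (843) and the adjusted population (766) come from this report; the respondent count is back-derived from the 72 percent overall response rate reported in the next paragraph and is an assumption for illustration.

```python
# Sketch of the survey-population adjustment and response-rate arithmetic.
# The mailed and adjusted-population counts are from the report; the
# respondent count is an assumed figure implied by the reported 72% rate.

mailed = 843
adjusted_population = 766
removed = mailed - adjusted_population   # surveys dropped after nonrespondent follow-up
respondents = 551                        # assumed; implies the reported rate

response_rate = respondents / adjusted_population
print(f"{removed} surveys removed; response rate about {response_rate:.0%}")
```

Removing nonrespondents who could not recall the assistance adjusts the denominator, which is why the rate is computed against 766 rather than 843.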
We obtained an overall response rate of 72 percent across all four surveys. Response rates varied from a low of 63 percent for the environmental/energy survey to a high of 76 percent for the quality improvement survey. Our analysis of the companies that did and did not respond to our survey found nothing to indicate that our results would have been different if the nonrespondents had completed our questionnaire. The respondents and nonrespondents were similarly distributed across different geographic locations and different MEP. Since we did not evaluate the operations or management of specific federal programs, we did not obtain agency comments on this report. However, on February 12, 1996, we discussed a draft of this report with NIST officials, including the Director of the NIST Manufacturing Extension Partnership Program. He agreed with the technical accuracy of the report and offered minor clarifications, which we incorporated into the report where appropriate. We did our work primarily in Los Angeles, New York, San Francisco, and Washington, D.C., from February 1995 to January 1996 in accordance with generally accepted government auditing standards. We used logistic regression techniques to determine which factors were statistically significant in predicting the reported impact of MEP assistance on companies’ overall business performance. We began our analysis by considering nine factors that may have affected how the manufacturers we surveyed assessed the impact of MEP assistance on their overall business performance. 
The factors included the following characteristics of those manufacturers: (1) the number of permanent employees as of January 1, 1995, (2) the number of hours company staff devoted to MEP assistance, (3) the year the company started operating, (4) the company’s gross sales in fiscal year 1994, (5) whether the company paid any fees for MEP assistance, (6) whether the company made any financial investments as a result of the assistance, (7) whether the assistance included recommendations, and (8) the percentage of MEP recommendations the company implemented. We also considered whether the company used a program that had received NIST funds. These factors all are listed in the first column of table III.1, which reports odds ratios indicating the effects of the different factors on the odds of MEP being assessed as extremely positive versus generally positive and as generally positive versus neutral or negative. The second column of table III.1 gives the category codings for each factor:

Number of permanent employees: 0 = 100 or more; 1 = 20-99; 2 = less than 20
Company staff hours devoted to MEP assistance: 0 = less than 100; 1 = 100-250; 2 = more than 250
Year the company started operating: 0 = before 1985; 1 = since 1985
Gross sales in fiscal year 1994: 0 = over $1 million; 1 = under $1 million
Paid any fees for MEP assistance: 0 = no; 1 = yes
Made any financial investments as a result of the assistance: 0 = no; 1 = yes
Assistance included recommendations: 0 = no; 1 = yes
Percentage of recommendations implemented: 0 = few or none; 1 = some; 2 = all or almost all
Used a NIST-funded program: 0 = yes; 1 = no

Number of permanent employees was dropped from the multivariate analysis because of its strong association with gross sales. Each of these two indicators of company size was significantly related to assessments when the other indicator was ignored. However, when we controlled for gross annual sales, the effect of number of permanent employees was not statistically significant. The percentage of recommendations implemented was dropped from the multivariate analysis because there were too few responses to perform the analysis. Only 70 percent of the companies received recommendations and provided information on the percentage of recommendations implemented. Some of these factors had many categories. We used loglinear methods to determine which of those categories differed with respect to companies’ assessment of the overall impact of MEP.
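The category codings in the second column of table III.1 imply a simple recoding of each company characteristic before the regression is run. A minimal sketch of that recoding follows; the function names are illustrative and do not come from the report's data files.

```python
# A minimal sketch of the categorical recoding implied by table III.1's
# second column. Function names and inputs are illustrative assumptions.

def code_employees(n):
    # 0 = 100 or more; 1 = 20-99; 2 = less than 20
    return 0 if n >= 100 else (1 if n >= 20 else 2)

def code_staff_hours(h):
    # 0 = less than 100; 1 = 100-250; 2 = more than 250
    return 0 if h < 100 else (1 if h <= 250 else 2)

def code_start_year(y):
    # 0 = before 1985; 1 = since 1985
    return int(y >= 1985)

def code_gross_sales(dollars):
    # 0 = over $1 million; 1 = under $1 million
    return int(dollars < 1_000_000)

# A hypothetical company: 45 employees, 120 staff hours, started 1990,
# $750,000 in gross sales.
print(code_employees(45), code_staff_hours(120), code_start_year(1990),
      code_gross_sales(750_000))  # -> 1 1 1 1
```

Collapsing continuous measures into these ordered categories is what allows the odds ratios in table III.1 to be read as comparisons between adjacent groups.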
We combined the categories that were not significantly different from one another. The categories that ultimately were contrasted with one another are given in the second column of table III.1. For the purpose of our analysis, the factors were used as the independent variables.

We used simple bivariate logistic regression models to estimate the individual influence of each factor on the reported impact of MEP assistance, without controlling for the influence of all other relevant factors identified in the survey. We estimated which of the nine factors, as categorized in table III.1, were related to (1) the odds on the overall impact of MEP being assessed as extremely positive versus generally positive and (2) the odds on the overall impact of MEP being assessed as generally positive versus negative or neutral. Our bivariate estimates are given as odds ratios in the third and fifth columns of table III.1. As can be seen in that table, seven of the nine factors had a significant relationship to the likelihood that companies assessed the impact of MEP assistance as extremely positive, as opposed to generally positive. In addition, four of the nine factors were significantly related to the odds of companies assessing the impact of MEP assistance as generally positive, as opposed to neutral or negative.

The bivariate odds ratios have a straightforward interpretation. The odds ratio gives an estimate of how each factor, as categorized in column 2 of table III.1, affected companies’ assessment of MEP assistance. For example, the companies with 20-99 employees were more than twice as likely as the companies with 100 or more employees to assess the impact of MEP as extremely positive as opposed to generally positive. Likewise, the companies with less than 20 employees were more than twice as likely as the companies with 20 to 99 employees to assess the impact of MEP as extremely positive, as opposed to generally positive.
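The "twice as likely" reading of an odds ratio can be illustrated with a small helper. The cell counts below are hypothetical, since the report gives only the ratios, not the underlying counts.

```python
# How an odds ratio yields a "more than twice as likely" reading.
# Cell counts are hypothetical; the report does not publish them.

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
                 extremely positive   generally positive
    group 1            a                     b
    group 2            c                     d
    """
    return (a / b) / (c / d)

# e.g., 30 of 60 smaller companies vs. 15 of 48 larger companies rating
# the impact extremely (rather than generally) positive:
print(round(odds_ratio(30, 30, 15, 33), 1))  # -> 2.2
```

An odds ratio of 1.0 would mean the two groups had identical odds; values above 1.0 mean group 1's odds are higher by that multiple.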
Similar interpretations can be given to the other odds ratios in the table. The bivariate odds ratios are estimates that do not take into account the effects of other variables. We also undertook multivariate analysis of the data. Multivariate analysis also estimated the individual effect of each factor on the reported impact of MEP assistance, but it controlled for the influence of all other relevant factors. It is necessary to control for the influence of multiple factors because some factors are associated with others, making it impossible to isolate their individual effect on the dependent variable. Our multivariate analysis did not include two factors used in the bivariate analysis: the number of permanent employees and the percentage of recommendations companies had implemented. The odds ratios in the fourth and sixth columns of table III.1 provide the results of multivariate analysis. Odds ratios that are marked by an asterisk represent statistically significant effects. Five factors had significant effects on the odds of whether programs were assessed extremely positively as opposed to generally positively: (1) the number of company staff hours devoted to the assistance, (2) when the company started operating, (3) the company’s 1994 fiscal year gross sales, (4) whether the company paid any fees for the assistance, and (5) whether the company made any financial investments as a result of the assistance. Only two factors had significant effects on whether assessments were generally positive as opposed to neutral or negative: (1) the number of company staff hours devoted to the assistance and (2) whether the company made any financial investments as a result of the assistance. Many of the significant effects from the multivariate analysis are quite sizable. 
For example, the companies that made financial investments were 2.5 times as likely as those that had not made financial investments to assess the impact of MEP assistance as extremely positive, as opposed to generally positive. The companies that made financial investments also were 5.6 times as likely as companies that had not made financial investments to assess the impact of MEP assistance as generally positive, as opposed to neutral or negative. Other odds ratios can be similarly interpreted. Our letter report features the results of the multivariate analysis. The multivariate estimates may differ from the bivariate estimates because the multivariate analysis controlled for the effects of all other factors when estimating the influence of one factor. Bivariate analysis estimates the influence of one factor without controlling for the effects of other factors. In general, the multivariate and bivariate estimates for each factor are similar, with two exceptions. The first exception is company staff hours devoted to MEP assistance. In the bivariate analysis, this factor was unrelated to whether companies assessed the impact of MEP assistance as extremely positive versus generally positive. However, multivariate results indicate that company staff hours were significantly related to companies’ assessment of the impact of assistance as extremely positive, as opposed to generally positive. We believe that the significance varies because of a relationship between company size and the number of company staff hours spent on MEP assistance. In particular, larger companies devoted more staff hours to the program. In order to accurately assess the independent influence of company staff hours, we needed to control for company size. Our multivariate model controls for company size by including the variable that measures gross sales. Therefore, the multivariate model provides a more accurate assessment of the impact of company staff hours, independent of company size. 
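Why bivariate and multivariate estimates can diverge, as with company staff hours and company size above, can be demonstrated on synthetic data. In the sketch below (a self-contained illustration, not the report's estimation code), the outcome y truly depends only on x2, but x1 is correlated with x2, so a bivariate model of x1 alone shows an apparent effect that vanishes once x2 is controlled for.

```python
import math

# Confounding demo: y depends only on x2, but x1 tracks x2, so a
# bivariate logistic regression on x1 alone overstates x1's effect.
# All data are synthetic; nothing here comes from the survey.

def fit_logistic(X, y, lr=1.0, iters=5000):
    """Logistic regression via averaged-gradient ascent on the
    log-likelihood; returns coefficients with the intercept first."""
    n, k = len(X), len(X[0])
    beta = [0.0] * (k + 1)
    for _ in range(iters):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * v for b, v in zip(beta[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p
            grad[0] += err
            for j, v in enumerate(xi):
                grad[j + 1] += err * v
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

# Synthetic cell counts keyed by (x1, x2, y): P(y=1) is 0.75 when x2=1
# and 0.5 when x2=0, regardless of x1; x1 is correlated with x2.
cells = {(1, 1, 1): 30, (1, 1, 0): 10, (1, 0, 1): 10, (1, 0, 0): 10,
         (0, 1, 1): 15, (0, 1, 0): 5, (0, 0, 1): 10, (0, 0, 0): 10}
rows = [key for key, count in cells.items() for _ in range(count)]

X_bi = [[x1] for x1, x2, y in rows]          # x1 only
X_multi = [[x1, x2] for x1, x2, y in rows]   # x1 controlling for x2
ys = [y for x1, x2, y in rows]

or_bi = math.exp(fit_logistic(X_bi, ys)[1])
or_multi = math.exp(fit_logistic(X_multi, ys)[1])
print(f"bivariate OR for x1: {or_bi:.2f}; OR controlling for x2: {or_multi:.2f}")
```

The bivariate odds ratio for x1 comes out around 1.2 even though x1 has no real effect, while the multivariate odds ratio is essentially 1.0. This is the same mechanism the report describes: controlling for gross sales changed the apparent effect of staff hours, and controlling for staff hours and investments absorbed the apparent effect of recommendations.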
The second exception was the factor measuring whether MEP assistance included recommendations. In our bivariate analysis, this variable was significantly related to both extremely positive and generally positive assessments. However, its significance disappeared in our multivariate analysis. Companies receiving recommendations were more likely to devote more staff hours to the program and to make financial investments as a result of MEP assistance. Therefore, when the multivariate analysis controlled for company staff hours spent on the assistance and financial investments made as a result of the assistance, the effect of recommendations was rendered insignificant.

Susan S. Westin, Assistant Director
Douglas Sloane, Supervisory Social Science Analyst
Stuart Kaufman, Senior Social Science Analyst
Barry L. Reed, Senior Social Science Analyst
Rona Mendelsohn, Senior Evaluator (Communications Analyst)
Patrick F. Gormley, Assistant Director
Amy L. Finkelstein, Evaluator-in-Charge
Edward Laughlin, Senior Evaluator

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO surveyed manufacturers' views regarding the impact of manufacturing extension programs (MEP), focusing on: (1) factors contributing to the positive impact on overall business performance reported by the majority of survey respondents; (2) whether companies' expectations were met regarding MEP impact on specific business performance indicators; and (3) whether companies thought that MEP actually demonstrated attributes they valued most. GAO found that: (1) the results of its survey could not be applied to all MEP participants or all MEP service categories; (2) companies that supplemented MEP assistance with their own resources, implemented more MEP recommendations, were small and relatively new, and did not pay fees for MEP assistance were more likely to view the MEP program positively; (3) the source of MEP funding did not influence companies' views of the assistance's impact; (4) National Institute of Standards and Technology (NIST) officials believed that NIST support improved MEP programs' efficiency and effectiveness and made MEP services more widely available; (5) about two-thirds to three-quarters of the companies that expected MEP assistance to enhance specific business indicators believed that the results met or exceeded their expectations; and (6) over 90 percent of the companies rated staff expertise, timely assistance, and reasonably priced MEP service fees and project proposals as important MEP features and most were satisfied with their specific MEP programs in these areas.
Annually, IRS audits some tax returns to determine whether taxpayers complied with the tax laws. IRS attempts to select returns for audit that have an indication of potential noncompliance based on, for example, its formula for flagging suspicious returns. IRS believes that a credible threat of being audited deters some noncompliance. IRS audits check compliance in reporting income, deductions, and other return items as well as in paying the correct tax liability. To conduct these compliance checks, IRS auditors ask taxpayers for documentation about specific items on their returns. IRS conducts two types of audits, face-to-face and correspondence, using three classes of auditors—revenue agents, tax auditors, and tax examiners. Face-to-face audits can be either (1) field audits, in which an IRS revenue agent visits an individual who has business income or a complex return, or (2) office audits, in which an individual who has a less complex return visits a tax auditor at an IRS office. Correspondence audits, as the name suggests, are done by tax examiners who correspond with taxpayers through the mail. Correspondence audits usually involve one line item on a return. Because correspondence audits involve fewer and usually simpler tax return items, they are less likely to burden taxpayers in terms of time, contacts with IRS, and documentation provided to IRS. IRS also checks compliance and contacts individual taxpayers through nonaudit enforcement programs. For example, IRS’ math error program checks returns for math and consistency errors and contacts taxpayers if such errors are found. IRS’ underreporter program matches the income reported on tax returns with the information returns (e.g., W-2 forms) filed by third parties, such as employers who pay wage income. If discrepancies are found, then taxpayers are mailed a notice. Although such contacts can be similar to correspondence audit contacts, IRS does not define them as audit contacts. 
Over the years, IRS has shifted contacts between the audit and nonaudit categories. For example, in fiscal 1997, IRS shifted over 700,000 cases involving missing or invalid social security numbers on tax returns from the correspondence audit program to the math error program. Changes in the definition of an audit could contribute to decreases or increases in the audit rate. To describe changes in audit rates for individuals (as opposed to partnership or corporate taxpayers), we used IRS’ method for computing audit rates. For all taxpayers and by taxpayer categories, the audit rate equals the proportion of IRS audits closed in a fiscal year compared to returns filed in the previous calendar year. We used data from IRS’ Databook, Audit Information Management System, and Statistics of Income about individual income tax returns filed in calendar years 1995 through 1999 and audits of the returns that closed in fiscal years 1996 through 2000. This allowed us to describe the changes and update the audit rate trends in earlier reports. We also described audit rates by various categories. One category was the income reported on individual income tax returns, which IRS divides into broad groups. Under IRS’ grouping, lower income individuals report income under $25,000 and higher income individuals report $100,000 or more of income on their tax returns. Other categories included the types of IRS audit, IRS office locations, and the major income sources. Nonbusiness sources include individuals who generated most of their income from sources such as wages, dividends, and interest. Business sources include individuals who generated most of their income from self-employment and reported that income on a schedule C (nonfarm income) or schedule F (farm income). For comparisons of lower and higher income by source of income, we excluded schedule F income because IRS’ data only split schedule F income into groups under and over $100,000.
We did include schedule F taxpayers in the overall audit rate. We interviewed officials from IRS’ Examination Division and the Brookhaven Service Center to discuss IRS’ reasons for changes in the audit rates from fiscal years 1996 through 2000. We also obtained available IRS data related to the reasons given by IRS officials. For example, we obtained IRS data on changes in the number of auditors and number of hours spent doing audits. We checked for inconsistencies between the raw data and the reasons that IRS officials gave us. However, due to time constraints, we did not do any more detailed analyses to determine the extent to which IRS’ reasons explained the changes in audit rates. Nor did we attempt to identify reasons beyond those offered by IRS. To describe what is known about the potential effects of changes in the audit rates on tax compliance, we used our previous and ongoing work on IRS audits, other IRS enforcement programs, and tax compliance. We also used information from our discussions with IRS officials. We did our work at IRS’ national office in Washington, D.C., between September 2000 and March 2001 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue. We received written comments from the Commissioner in a letter dated April 19, 2001. The comments are reprinted in appendix IV and discussed at the end of this letter. From fiscal years 1996 through 2000, the overall income tax audit rate of individuals declined. As table 1 shows, IRS’ annual audit rates for individuals declined from 1.67 percent to 0.49 percent—about 70 percent. The table also shows that the audit rates fell for all major sources of income—nonbusiness as well as schedule C and schedule F business returns—over the 5 years. Table 1 also shows that the audit rate patterns for each year changed little from fiscal years 1996 to 2000.
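IRS’ audit-rate method and the overall decline can be reproduced arithmetically. In the sketch below, the return and audit counts in the first call are hypothetical placeholders, while the 1.67 and 0.49 percent rates are the values reported in table 1.

```python
# IRS' audit-rate method: audits closed in a fiscal year divided by
# individual returns filed in the previous calendar year, as a percentage.

def audit_rate(audits_closed_fy, returns_filed_prior_cy):
    return audits_closed_fy / returns_filed_prior_cy * 100

# Hypothetical counts: 500,000 audits closed against 100 million returns.
print(audit_rate(500_000, 100_000_000))  # -> 0.5

# The reported overall rates for fiscal years 1996 and 2000:
rate_1996 = 1.67  # percent
rate_2000 = 0.49  # percent
decline = (rate_1996 - rate_2000) / rate_1996 * 100
print(f"decline: {decline:.1f}%")  # about 70 percent, as the report states
```

Because the denominator is the prior calendar year's filings, shifts in either audit closures or return volume move the rate, which is why the report tracks both.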
Schedule C business returns were more than twice as likely to be audited as nonbusiness returns in each year. Table 2 shows that audit rates declined about equally—67 percent and 70 percent, respectively—for lower and higher income individuals. When taxpayers are separated into nonbusiness and business income groups, audit rates declined at least 42 percent from fiscal years 1996 to 2000 for lower and higher income individuals in the two groups. Table 2 also shows that higher income individuals were more likely to be audited than lower income individuals in each of the 5 years. However, exceptions to this pattern emerged when these audit rates by income level were analyzed by source of income. First, in the nonbusiness group, IRS was more likely to audit lower income individuals only in fiscal year 1999. Second, in the business group (schedule C), the rates fluctuated by income levels. IRS was more likely to audit lower income individuals in fiscal year 1996, higher income individuals in fiscal years 1997 and 1998, and lower income individuals in fiscal years 1999 and 2000. Most audits of lower income individuals were correspondence audits, with the proportion of audits of lower income individuals that were correspondence audits ranging from 69 to 84 percent over the 5 years. Audits of higher income individuals were mostly face-to-face audits, ranging from 62 to 75 percent over the 5 years. (See table 13 in app. I for details.) Correspondence and face-to-face audit rates also varied by taxpayer income. For example, in fiscal year 2000, the face-to-face audit rate (face-to-face audits divided by all returns filed) for higher income individuals was 0.60 percent compared with 0.13 percent for lower income individuals. For correspondence audits in fiscal year 2000, the audit rate for higher income individuals was 0.37 percent and for lower income individuals was 0.50 percent. (See table 14 in app. I for details.)
Table 3 shows that both types of face-to-face audits (field and office) and correspondence audits declined by similar rates from 1996 to 2000. Table 3 also shows that correspondence/tax examiner audits accounted for over half of all audits in each year (ranging from 54 percent to 67 percent) and that the number of audits declined each year for all types of audits/auditors except for correspondence audits in fiscal year 1999. The declines in audit rates were spread uniformly across IRS' four regions. However, audit rates varied by region. The audit rates declined about 50 percent in each of IRS' four regional offices from fiscal years 1996 to 1999. For each of these 4 years, the audit rates were highest in the Western Region (1.09 to 0.47 percent) compared to the Northeast Region (0.44 to 0.23 percent), the Southeast Region (0.58 to 0.28 percent), and the Midstates Region (0.62 to 0.32 percent). (See tables 18, 19, and 20 in app. III.) According to IRS officials, overall audit rates declined for fiscal years 1996 to 2000 for three main reasons. First, IRS had fewer auditors for individual returns for reasons that include a decline in staff and decisions to change staffing priorities to focus on customer service. Second, IRS was more likely to use the remaining auditors in other duties, such as assisting taxpayers. Third, audits took longer due to additional requirements, such as more written communications with taxpayers about the status of their audit. With respect to changes in the audit rate by income levels, IRS officials cited an increase in the number of high-income tax returns and an audit focus on noncompliance by earned income credit claimants, who are usually lower income individuals. IRS' raw data were generally consistent with all these reasons. However, due to time constraints, we did not analyze the data to determine the extent to which IRS' reasons explained the changes in audit rates.
According to IRS officials, IRS did fewer audits between fiscal years 1996 and 2000, in part because it had fewer auditors. IRS officials explained that auditor staff levels declined for two reasons. First, tight budgets in the 1990s reduced overall staffing levels. Second, IRS put more staff in positions to serve taxpayers and generally has not hired revenue agents or tax auditors since 1995. As shown in table 4, the number of revenue agent and tax auditor positions assigned to audit individual income tax returns declined steadily since 1996. By fiscal year 2000, the number of these positions had declined about 54 percent for revenue agents and about 61 percent for tax auditors. This represents a loss of over 2,000 staff years for audit staff devoted to field and office audits. On the other hand, tax examiner positions, which handle the simpler correspondence audits, increased 13 percent, or 200 positions, between fiscal years 1997 and 2000 (data for fiscal year 1996 were not available). IRS officials also said that IRS' auditors spent less time auditing in fiscal years 1996 through 2000. Our analysis of IRS' data, as shown in table 5, indicates that for individual income tax returns, the average amount of direct audit time—actual time doing audit work—declined in comparison to time spent on nondirect audit activities. Nondirect audit activities include taxpayer assistance, other details, and training. Part of the reason for the decline in auditing is that revenue agents and tax auditors spent increasingly more time providing taxpayer assistance between fiscal years 1996 and 2000. The amount of time spent on taxpayer assistance by revenue agents increased from about 1.0 percent of available staff years in 1996 to about 4.4 percent of available staff years in 2000. The amount of time spent on taxpayer assistance by tax auditors increased from about 1.4 percent of available staff years in 1996 to about 12.3 percent of available staff years in 2000.
IRS did not have comparable data for assistance provided by tax examiners who had been slated to do audits. In addition, revenue agents and tax auditors had less time to audit because of increased time in training. (See table 17 in app. II for additional information on revenue agent and tax auditor training.) Considering the 54-percent decrease in the number of revenue agents, the training time per revenue agent increased about 227 percent. The training time per tax auditor over the same 5 years increased about 95 percent. IRS did not have comparable training data on tax examiners. Finally, IRS officials said that auditors generally took longer to finish audits during fiscal years 1996 to 2000. Our analysis of IRS’ data for this period (see table 16 in app. II) showed that the average time to finish an audit increased for all types of auditors, including about 37 percent (20.2 hours to 27.6 hours) for revenue agents (field audits), 56 percent (4.6 hours to 7.1 hours) for tax auditors (office audits), and 153 percent (0.7 hours to 1.8 hours) for tax examiners (correspondence audits). IRS officials told us that Internal Revenue Service Restructuring and Reform Act of 1998 requirements increased audit time. Among other things, these requirements resulted in IRS auditors having to send more notices to taxpayers and third parties that provide information about the taxpayer being audited. New requirements to explain innocent spouse provisions and to protect taxpayers under audit have generated more review work. These officials said the act has created many new tasks during audits. Other factors, such as the experience level of the auditor and complexity of the audit, also affected audit time per return. For example, IRS officials said that they lost many experienced auditors to higher graded positions elsewhere in IRS. Because multiple factors affect audit time per return, determining the contribution of each factor to changes in audit time could be difficult. 
Because of time constraints, we did not attempt such an analysis. IRS officials offered two reasons why the audit rates for lower income individuals exceeded the rates for higher income individuals in selected years among the nonbusiness and business groups. First, as table 6 shows, the number of higher income returns filed in calendar years 1995 through 1999 that were subject to audits in fiscal years 1996 to 2000 significantly increased compared with the number of lower income returns filed. For nonbusiness returns, the number of higher income returns filed rose 80 percent compared with a 5-percent decrease for lower income returns filed. For business returns, the number of higher income individual returns increased at about three times the rate of lower income business returns. Second, IRS' audits in fiscal years 1997 through 2000 have continued to focus on earned income credit (EIC) noncompliance, usually by lower income individuals. As table 7 shows, EIC audits, usually correspondence audits, accounted for a large percentage of the audits of lower income taxpayers, regardless of their major source of income. In fact, the EIC portion of all audits for lower income taxpayers in fiscal year 2000 was more than double the fiscal year 1997 EIC portion of these audits. IRS officials also said that a project to address noncompliance by schedule C filers who claimed EIC explained the greater audit rates for lower income business filers compared with those with higher incomes during fiscal years 1999 and 2000. The specific effect of the recent decline in the audit rate on the level of voluntary compliance is not known. One reason is that IRS does not have current reliable information on the levels of voluntary compliance. IRS last measured overall income tax compliance for tax year 1988. IRS and others are concerned that changes in the tax laws, economy, and demographics since 1988 have made the compliance information out of date.
Even if IRS had this information, IRS would still need to take a number of steps to try to determine the specific link between changes in audits and changes in voluntary compliance levels. Historically, measuring the specific impact of audit rate changes on voluntary compliance has been difficult. It is difficult to collect data on nonaudit factors that also can affect voluntary compliance levels, and then to control for these factors in order to isolate the impact of audit rate changes. For example, it is difficult to determine the effect of declining audit rates on voluntary compliance when IRS' nonaudit checks could offset to some degree any negative effects of declining audit rates on compliance. Since the 1970s, for example, the underreporter program has grown, covering more types of income, especially among nonbusiness taxpayers. IRS also uses the math error program to help ensure taxpayer compliance. Since the math error and underreporter checks can be similar to correspondence audits, growth in these programs may offset to some degree the decline in the audit rate. Furthermore, it has also been difficult to measure how improvements in assisting and educating taxpayers about their tax obligations compensate for declining audit rates. These IRS efforts, although not designed to find noncompliance, could help taxpayers to voluntarily comply. To the extent that education efforts succeed in promoting compliance, overall compliance would not necessarily decline if the audit rate declines. IRS has been allocating more resources to taxpayer assistance and education. One example is the increased use of revenue agents and tax auditors to provide taxpayer assistance. Because IRS does not have a measure of voluntary compliance, we do not know the net effects on tax compliance levels of the declining audit rates, changes in the volume of nonaudit checks, and any improvements in IRS' educational efforts.
On April 19, 2001, we received written comments on a draft of this report from the Commissioner of Internal Revenue (see app. IV). The Commissioner said that IRS agrees with our presentation and analysis of the audit rate data as well as with the need for current and reliable data on voluntary compliance. The Commissioner agreed that changes in the economy and tax laws have rendered IRS' compliance data obsolete. The Commissioner's comments also expanded on what IRS officials told us during our work about the reasons for the audit rate decline. Specifically, the comments said that two provisions (sections 1203 and 1204) of the Restructuring and Reform Act of 1998 created a cautionary environment that led to audits taking longer. The Commissioner also said that the report did not acknowledge historical data in two studies--one by IRS and one by external researchers--on the effects of the decline in audit rates on voluntary compliance. We did not acknowledge these studies in the report because, while they estimate a relationship between audits and compliance, they are not based on current data. The most recent of the studies used data from 1982 through 1991. Because of the possibility that changes over time in the economy, tax laws, demographics, and IRS compliance programs have changed the relationship between audit rates and voluntary compliance, we did not cite the studies. These two studies did report a positive relationship between audit rates and voluntary compliance. This finding is consistent with the concern, which we describe in the report, that a decline in audit rates could lead to a decline in voluntary compliance. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this letter until 30 days from its date of issue. We will then send copies to Representative William M. Thomas, Chairman, and Representative Charles B.
Rangel, Ranking Minority Member, House Committee on Ways and Means; Representative William J. Coyne, Ranking Minority Member, Subcommittee on Oversight, House Committee on Ways and Means; and Senator Charles E. Grassley, Chairman, and Senator Max Baucus, Ranking Minority Member, Senate Committee on Finance. We will also send copies to the Honorable Paul H. O'Neill, Secretary of the Treasury; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget; and other interested parties. Copies of this report will be made available to others upon request. If you have any questions, please contact me or Tom Short at (202) 512-9110. Key contributors to this report are acknowledged in appendix V. Appendix I: Individual Income Tax Audit Rate Trends for Fiscal Years 1996 Through 2000 TPI = total positive income (income from positive sources only) Schedule C-TGR = total gross receipts (profit or loss from business) Schedule F-TGR = total gross receipts (profit or loss from farming) Note 1: We combined two of IRS' audit income levels ($25,000 to $50,000 and $50,000 to $100,000) into one income group ($25,000 to $100,000) because their audit rates were similar. Note 2: Returns filed in calendar years 1995 through 1999 are used to compute fiscal years 1996 through 2000 audit rates.
The audit rate for lower income individuals is greater than the rate for higher income individuals for both nonbusiness and business filers in fiscal year 1999. However, when nonbusiness and business filers are combined into one category, the audit rate for higher income individuals is greater than the audit rate for lower income individuals. This occurs due to the large number of lower income nonbusiness filers (57.4 million) compared to the number of lower income business filers (2.5 million). As a result, the combined audit rate for lower income individuals is more dominated by the audit rate for nonbusiness filers (1.18 percent) and less dominated by the audit rate for business filers (2.69 percent). Therefore, the combined audit rate only increases to 1.24 percent from the 1.18-percent audit rate for lower income nonbusiness filers. On the other hand, the difference between the number of higher income nonbusiness filers (7 million) and the number of higher income business filers (1.9 million) is not nearly as large as for lower income individuals. As a result, the combined audit rate for higher income filers is less dominated by the audit rate for nonbusiness filers (1.14 percent) and more dominated by the audit rate for business filers (2.40 percent) compared to lower income filers. Therefore, the combined audit rate increases to 1.40 percent from the 1.14-percent audit rate for higher income nonbusiness filers. Tax examiner totals for fiscal years 1998, 1999, and 2000 include service center and district office tax examiner totals. In addition to those named above, Helen Branch, Jay Pelkofer, Susan Baker, Michele Fejfar, Anne Rhodes-Kline, MacDonald Phillips, and Robert DeRoy made key contributions to this report.

The Internal Revenue Service (IRS) does various compliance checks to ensure the accuracy of information reported on taxpayers' returns.
In recent years, the audit rate--the proportion of tax returns that IRS audits each year--has drawn attention because of a long-term decline in audit rates and the differences in audit rates for lower and higher income individuals. This report (1) describes the changes in audit rates for individual income tax returns overall and for categories, such as major sources (i.e., nonbusiness versus business) and levels of income for fiscal years 1996 through 2000; (2) discusses IRS' reasons and related data explaining the changes in audit rates; and (3) describes what is known about the effects of changes in the audit rates on tax compliance. In comparing fiscal years 1996 and 2000, GAO found that the overall tax audit rate of individuals declined about 70 percent. These rates declined regardless of the individual taxpayer's income level. IRS cited the following three reasons for the decline in audit rates for fiscal years 1996 to 2000: (1) the number of IRS auditors for individual returns declined by more than half due to a decline in total staff and decisions to change staffing priorities to focus on customer service; (2) the remaining auditors were used in other areas, such as assisting taxpayers; and (3) audits took longer due to additional audit requirements, such as more written communications with taxpayers about the status of their audit. To explain the changes in the audit rates by income levels, IRS officials cited increases in the number of high-income tax returns and an audit focus on noncompliance by earned income credit claimants, who are lower income individuals. Finally, neither IRS nor external observers know how the decline in audit rates affects voluntary tax compliance.
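As an arithmetic check on the combined audit rates discussed in appendix I above, the combined rate for an income group is a return-weighted average of the nonbusiness and business rates. A minimal sketch (the function name is ours; the inputs are the rounded figures from the appendix):

```python
def combined_rate(groups):
    """Return-weighted average audit rate, in percent.
    groups: list of (returns_filed_in_millions, audit_rate_percent) pairs."""
    total_returns = sum(n for n, _ in groups)
    total_audits = sum(n * r / 100.0 for n, r in groups)
    return 100.0 * total_audits / total_returns

# Lower income: dominated by the much larger nonbusiness group.
print(round(combined_rate([(57.4, 1.18), (2.5, 2.69)]), 2))  # -> 1.24
# Higher income: business filers pull the combined rate up further.
# (1.41 here; the report's 1.40 reflects unrounded inputs.)
print(round(combined_rate([(7.0, 1.14), (1.9, 2.40)]), 2))  # -> 1.41
```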
Consistent with provisions of the Homeland Security Act, as amended, the Chief Intelligence Officer (CINT) of the department—who also holds the position of DHS Under Secretary for Intelligence and Analysis and is the head of I&A—is to exercise leadership and authority over the enterprise and intelligence policy throughout the department. For example, the CINT is to provide strategic oversight to and support the missions and goals of members of the enterprise. The enterprise is composed of all DHS component intelligence programs. Specifically, it consists of I&A, the intelligence elements of six DHS operational components (CBP, USCIS, ICE, TSA, the U.S. Coast Guard, and the U.S. Secret Service), and three DHS headquarters elements supported by I&A (the National Protection and Programs Directorate, the Office of Operations Coordination and Planning, and the Office of the Chief Security Officer). However, as shown in figure 1, not all members of the enterprise perform intelligence analysis activities. I&A emerged from what the Homeland Security Act originally established as the Directorate for Information Analysis and Infrastructure Protection. Specifically, the Homeland Security Act established within DHS the directorate—led by the Under Secretary of Homeland Security for Information Analysis and Infrastructure Protection—to carry out DHS's responsibilities in regard to, among other things, information analysis. In 2007, amendments to the Homeland Security Act included in the Implementing Recommendations of the 9/11 Commission Act of 2007 reorganized the department by dividing the directorate into an Office of Intelligence and Analysis—headed by the Under Secretary for Intelligence and Analysis—and an Office of Infrastructure Protection, headed by the Assistant Secretary for Infrastructure Protection.
As a result of the reorganization, responsibilities previously assigned to the Under Secretary for Information Analysis and Infrastructure Protection relating to intelligence analysis became attributed to the Secretary (as carried out through the Under Secretary for Intelligence and Analysis) and remained largely unchanged. I&A is an element of the Intelligence Community. According to its February 2011 strategic plan, I&A's mission is to equip the homeland security enterprise with the intelligence and information it needs to keep the homeland safe, secure, and resilient. To carry out its mission, I&A is to ensure that information related to homeland security threats is collected, analyzed, and disseminated to homeland security partners to keep the homeland safe, secure, and resilient. I&A prepares written, finished analytical products that it makes available to its customers. In addition to written analytical reports, I&A provides intelligence analysis to its customers through oral briefings (both classified and unclassified) and other analytic services, such as access to I&A analysts for review of the customers' intelligence products, and advice and instruction on tradecraft standards. I&A also provides services to state, local, tribal, and territorial partners, as well as the private sector, through deployed personnel at fusion centers. Additionally, I&A's deployed personnel are to assist fusion centers in obtaining needed information and analysis to aid in supporting their own customer bases. I&A is also to act as a conduit between the DHS component intelligence programs and the Intelligence Community. For example, I&A provides relevant information from the Intelligence Community to the components. In 2010, we were mandated to identify programs, agencies, offices, and initiatives with duplicative goals and activities within departments and government-wide and report annually to Congress.
In March 2011, February 2012, April 2013, and April 2014, we issued our first four annual reports to Congress in response to this requirement. These reports describe, in part, areas in which we found evidence of fragmentation, overlap, or duplication among federal programs. Consistent with the framework we established in these reports, we used the following definitions for the purpose of assessing the analysis activities of the enterprise: Fragmentation occurs when more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need. Overlap occurs when multiple programs have similar goals, engage in similar activities or strategies to achieve those goals, or target similar beneficiaries. Duplication occurs when two or more agencies or programs are engaging in the same activities or providing the same services to the same beneficiaries. DHS has established mechanisms—including a governance board, intelligence framework, and analysis planning process—intended to better integrate analysis activities across the enterprise and help ensure that activities support strategic departmental intelligence priorities. However, the framework does not establish strategic departmental intelligence priorities or drive the analytic planning process across the enterprise, as intended. On the other hand, intelligence officials from I&A and all five components in our review reported that efforts to implement these mechanisms, particularly the analytic planning process, have allowed them to coordinate component activities and avoid unnecessary overlap or duplication of efforts. Likewise, we did not find unnecessary overlap or duplication in our review of border security products from I&A and two components. According to the DHS Intelligence Enterprise Strategic Plan, an integrated and collaborative Intelligence Enterprise is crucial to the department accomplishing its homeland security mission.
Further, according to the plan, removing existing barriers to integration, while concurrently respecting and supporting DHS components' unique missions—such as CBP's mission to protect the borders—is necessary in order to fully integrate the DHS components that perform intelligence analysis. In 2005, DHS established a governance board—the Homeland Security Intelligence Council (the council)—to serve as the decision-making and implementation oversight body that supports the CINT in leading and managing the activities of the enterprise, and furthering a unified, coordinated, and integrated intelligence program for the department. The council's Analysis Working Group—composed of subject matter experts from I&A's analytic divisions and the intelligence elements of the components—is to coordinate analysis and production across the enterprise. Additionally, DHS has established two key mechanisms to help ensure that component intelligence analysis activities and resource investments align to support strategic departmental intelligence priorities—the Framework and the Program of Analysis (POA). According to DHS guidance, the Framework is an annual document that is intended to present the department's overall strategic intelligence priorities and to inform annual intelligence planning decisions related to collection, analysis, and resource management. In addition to the Framework, the POA is to be an annual document that DHS uses to plan and manage the more specific intelligence analysis activities of its enterprise. Specifically, according to DHS guidance, the POA is to identify a series of key intelligence questions that the enterprise will seek to address in the upcoming year. According to the 2011 I&A Strategic Plan and DHS guidance, the strategic priorities in the Framework should be used to drive analytic planning and production in the POA. However, neither mechanism is working as intended.
Specifically, we identified two gaps in the implementation of these mechanisms that limit DHS's assurance that component analytic activities support strategic departmental intelligence priorities. The Framework does not establish strategic departmental priorities: The Framework does not establish strategic departmental intelligence priorities that can be used to inform annual intelligence planning decisions—such as what analytic activities to pursue and what level of investment to make—as called for in DHS guidance. The Framework—developed by I&A's Information Sharing and Intelligence Enterprise Management Division—was established in fiscal year 2011. However, the senior I&A official who assumed management of the Framework process in May 2013 stated that the Framework does not accomplish its intended purpose of informing annual intelligence planning decisions. According to this official, DHS took a bottom-up approach to developing the Framework in fiscal years 2011 through 2013. Specifically, DHS created a matrix that links the broad departmental missions outlined in the Quadrennial Homeland Security Review (QHSR)—such as preventing terrorist attacks and effectively controlling U.S. borders—to different intelligence categories that are relevant to these missions. DHS allocated each component 100 points and asked the components to distribute these points across the QHSR missions and intelligence categories in accordance with the types of intelligence activities the component conducted. I&A then added up the total points assigned by the components for each intelligence category. These totals became a proxy for the intelligence priorities of the enterprise as a whole. Accordingly, the Framework presented the existing intelligence activities of the members of the enterprise, rather than outlining strategic departmental intelligence priorities. According to officials responsible for the Framework, the goal of the fiscal years 2011 through 2013 Frameworks was to understand the diverse nature of intelligence priorities across the enterprise, not to establish overall departmental intelligence priorities. Accordingly, the officials acknowledged that the Framework does not fulfill its stated purpose of informing annual intelligence planning decisions. To begin addressing this problem, the officials stated that I&A plans to make some changes to the fiscal year 2014 Framework. However, the planned changes relate primarily to the methodology used to tally the current intelligence activities of the components. Specifically, the officials said that rather than giving all components equal weighting, the fiscal year 2014 Framework will give greater weight to the priorities of components that have a large intelligence portfolio. However, the official stated that the fiscal year 2014 Framework will look similar to the fiscal year 2013 Framework, and the planned changes will not result in the establishment of strategic departmental intelligence priorities that can be used to inform annual intelligence planning and resource decisions, as called for in DHS guidance. As of March 2014, the fiscal year 2014 Framework had not been finalized. The Framework does not drive analytic planning and production: The Framework was not used to drive analytic planning and production in the POA. The POA, established in fiscal year 2011, is overseen by the Homeland Security Intelligence Council's Analysis Working Group. As with the Framework, DHS took a bottom-up approach to developing the POA for fiscal years 2012 and 2013. Specifically, each component independently developed a set of key intelligence questions that it would address in the upcoming year.
Officials from each of the five components we contacted said that they developed these questions in order to be responsive to their unique missions and customer needs. DHS then aggregated these individual component responses to generate the POA. For the 2013 POA, this resulted in a catalogue of more than 80 questions to be addressed. DHS modified the process for the fiscal year 2014 POA in order to focus analytic planning efforts on a smaller, more strategic set of priority questions of common interest to the enterprise as a whole. As a result, the number of key intelligence questions decreased to 15. Also, rather than having each component individually generate key intelligence questions for inclusion in the POA, subject matter experts from participating DHS components brainstormed to collectively develop the 2014 key intelligence questions, determine which DHS components would contribute analysis to each question, and identify what specific intelligence products each would generate to address the questions. According to the I&A official responsible for the POA, the brainstorming was informed by the subject matter experts’ daily engagement with DHS leadership, knowledge of operational and policy priorities and missions, the current threat environment, customer-specific requirements, and Intelligence Community and law enforcement engagement and collaboration. However, given the Framework limitations described above, and because of the new focus of the fiscal year 2014 POA, the official stated that the 2014 process was not informed or influenced by the Framework. Because of the gaps we identified in the Framework and POA processes, DHS cannot provide reasonable assurance that component intelligence analysis activities and resource investments throughout the enterprise are aligned to support both strategic departmental intelligence priorities as well as component-specific priorities driven by their unique operational missions. 
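The bottom-up Framework tally described above (each component distributes 100 points across intelligence categories, and I&A sums the points per category) amounts to a simple aggregation. A hypothetical sketch, with illustrative component names and categories that are not taken from the report:

```python
from collections import defaultdict

def tally(allocations):
    """Sum each component's 100-point allocation by intelligence category."""
    for component, points in allocations.items():
        # Each component must distribute exactly 100 points.
        assert sum(points.values()) == 100, component
    totals = defaultdict(int)
    for points in allocations.values():
        for category, pts in points.items():
            totals[category] += pts
    return dict(totals)

# Illustrative allocations (not actual DHS data).
allocations = {
    "Component A": {"border security": 60, "counterterrorism": 40},
    "Component B": {"counterterrorism": 70, "cybersecurity": 30},
}
print(tally(allocations))
# -> {'border security': 60, 'counterterrorism': 110, 'cybersecurity': 30}
```

The fiscal year 2014 change described above, giving greater weight to components with large intelligence portfolios, could be modeled by multiplying a component's points by a portfolio weight before summing, rather than treating all components equally.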
According to I&A officials, it can be challenging for components to focus on both overall strategic departmental intelligence priorities and their more tactical intelligence priorities that support their specific operations and component customers. As articulated in the DHS Intelligence Enterprise Strategic Plan, the CINT recognizes the importance of respecting and supporting these unique component intelligence missions. However, to achieve its strategic goal of establishing an integrated intelligence enterprise, it will be important for DHS to clearly define its departmental intelligence priorities and ensure that annual enterprise intelligence planning activities are aligned to support these priorities. Establishing strategic departmental intelligence priorities in the Framework, and using these priorities to inform the planned analytic activities of the enterprise, as articulated in the POA, would better assure DHS of this alignment. DHS’s efforts to implement mechanisms—particularly the POA—have helped to minimize unnecessary overlap and duplication across component intelligence analysis activities. The intelligence analysis activities within DHS are inherently fragmented, as multiple DHS components conduct intelligence analysis in support of the same broad intelligence topics. For example, I&A, CBP, ICE, TSA, and the U.S. Coast Guard all contributed intelligence analysis to the DHS goal of Securing and Managing Our Borders in fiscal year 2012, as articulated in that year’s POA. However, intelligence officials at I&A and all five operational components with whom we met stated that the mechanisms DHS instituted to integrate analysis throughout the enterprise—particularly the POA—helped to avoid unnecessary overlap and duplication. Specifically, senior intelligence officials in all five components said the POA helped them gain a better understanding of the work of other components or prevent duplication in the intelligence analysis activities of the components. 
For example, U.S. Coast Guard intelligence officials stated that through the POA process, they gained an understanding of what other DHS components were doing in the intelligence realm to meet their mission requirements and learned that they may have some areas of commonality with CBP. This understanding allowed U.S. Coast Guard analysts to reach out to CBP analysts to determine which component would lead analysis work on different topics, thereby preventing duplication. Specifically, the U.S. Coast Guard agreed to lead maritime border analysis work and CBP agreed to lead land border analysis work. Intelligence officials from another component stated that because of the POA, departmental components have developed a relationship such that two components conducting similar analysis will work together to determine which component is better suited to do the analysis. Officials from four of the five DHS components we met with also reported that the revised POA process used in fiscal year 2014 was an improvement over the process used in prior years for various reasons. The reasons most often mentioned were improved coordination and reduced redundancy, and the ability of the components to display their contributions and inject the components’ internal needs into the broader priorities included in the POA. Additionally, DHS is considering additional ways to enhance coordination across its enterprise, according to I&A officials. For example, officials said that DHS is exploring ways to have the Homeland Security Intelligence Council’s Analysis Working Group— the group currently responsible for developing the POA—play a more active role in sharing information on planned analytic production throughout the enterprise and in monitoring joint production across the components. In addition, we did not find evidence of any duplication among the intelligence analysis products that we reviewed as part of our case study in the area of border security. 
We did identify one instance of overlap where two DHS components provided similar information to similar customers in different products, but the information was packaged differently such that the products were not duplicative. This result is consistent with our determination based on our case study review, which senior intelligence officials at I&A and the components confirmed, that I&A products tend to be more strategic and written to a broader customer base than are component intelligence products, which tend to be more tactical and written to component operators. We also found that I&A products tend to be based on multiple sources of information, including DHS component data and information from the Intelligence Community, whereas component products tend to be sourced primarily from internal component data. In this way, I&A products aggregate information from across the department and the Intelligence Community to present a national picture, whereas component products use component information to discuss the implications on component operations. For example, a component intelligence product may provide analysis of a specific law enforcement incident, such as a drug seizure in a specific jurisdiction, whereas an I&A product may provide a strategic look at drug activity across the country using data from CBP, ICE, and international law enforcement sources. Results from customer feedback surveys that are attached to I&A intelligence products indicate general satisfaction with these products, but the results have limitations that prevent I&A from drawing readership-wide conclusions. Our discussions with representatives of I&A’s five customer groups indicate that customers in two groups—DHS leadership and state and local officials at fusion centers—found I&A’s products to be useful, while customers in the other three groups—DHS components, the Intelligence Community, and private critical infrastructure sectors— generally did not. 
Customers in four of the five groups reported that they found other types of I&A analytic support to be useful, such as briefings from I&A analysts. I&A is taking steps to identify the analytic products and services that best meet its customers’ needs. I&A assesses the accuracy and usefulness of its intelligence analysis products through voluntary customer feedback surveys that are attached to products. Survey results indicate general satisfaction, but the results have limitations that prevent I&A from drawing statistically significant conclusions about all the readers of its products. Specifically, during fiscal year 2013, 73 percent of I&A customers that responded to the feedback surveys were very satisfied with the products’ usefulness, and 68 percent were very satisfied with the products’ timeliness. During this period, I&A issued a total of 830 intelligence products, of which 467 included a customer feedback survey. I&A received survey responses for 104 (22 percent) of the 467 products, with a total of 4,162 responses. According to ODNI officials, I&A’s level of effort directed at obtaining feedback from its customers, along with the number of responses received, were above the norm for the Intelligence Community. While generally positive, I&A survey results have limitations due in part to a bias in those who respond to the surveys. This bias exists because the results reflect the views of I&A customers who chose to respond to the feedback form and do not reflect the views of those who did not respond, a fact that prevents I&A from drawing statistically significant conclusions about the usefulness of its products since the results are not representative of all customers. Also, certain products concerning particularly notable topics can also account for a disproportionate number of survey responses received, which can bias the overall results since the responses do not reflect the full range of issued products. 
For example, in September 2012, I&A data show that three intelligence products accounted for 513 (88 percent) of the 586 total survey responses received during the month. The remaining 73 responses (12 percent) I&A received during the month covered a total of 18 products. Further, because I&A’s products are distributed through broad information-sharing networks and generally not sent directly to individual recipients, I&A cannot determine the full extent of its customer base. I&A officials agree that this distribution method limits its ability to form readership-wide conclusions based on the survey results; however, I&A officials stated that I&A uses these networks to further its goal for a wide distribution of its products. Additionally, because of the broad distribution through information sharing networks, I&A cannot track the number of individuals who ultimately open and read its products in order to obtain more complete feedback. All of these factors limit I&A’s ability to draw any readership-wide conclusions about its products. Despite these limitations, I&A officials view the survey results as a useful tool for generally understanding how their products are being received by the customer base. The surveys also allow customers to submit product and service improvement recommendations that help to inform I&A in making adjustments to products to better serve customers’ needs. Officials from I&A’s Production Management Division said they provide the survey results to the I&A analytic division responsible for the product in order for the division to consider and make improvements in future products. In addition, I&A formed the Analytic Production Improvement Council in fiscal year 2012 to implement product quality improvement initiatives. For example, one product improvement effort the council implemented was to change product titles to better inform prospective customers of the topic of interest in those products. 
I&A has established portals on web-based distribution networks to facilitate customers’ access to its intelligence products, such as the Homeland Security Information Network for unclassified products and the Homeland Secure Data Network for classified reporting. According to I&A’s strategic plan, because of resource constraints and other factors, I&A gives priority in its intelligence analysis efforts to the needs of DHS leadership and state and local customers. I&A officials added that the priority order of customer groups is (1) departmental leadership; (2) state, local, tribal, and territorial partners; (3) DHS operational components; (4) Intelligence Community members; and (5) private critical infrastructure sectors. Our work indicates that I&A’s finished intelligence products are useful and relevant to DHS leadership and state, local, tribal, and territorial customers, but generally not to customers in the other three groups. Officials from four of the five customer groups (state, local, tribal, territorial partners; DHS components; the intelligence community; and the private sector) said they found other types of I&A analytic support to be useful, including briefings on emerging threats, tradecraft review of the customers’ own products, assistance with developing intelligence analysis, training, and the general exchange of information. I&A’s first priority is to provide finished intelligence products and other analytic services to DHS leadership—which consists of the Secretary of Homeland Security, Deputy Secretary, and DHS component heads—to support decision making. According to I&A officials who brief the Secretary, I&A’s products and services are useful and relevant to meeting leadership needs. For example, I&A’s daily intelligence report and related threat briefing to the Secretary include actions DHS components plan to take in response to threats, which helps the Secretary ensure that the department is taking actions to mitigate these threats. 
I&A has given priority to providing intelligence reports and other support to state, local, tribal, and territorial entities, consistent with provisions of the Homeland Security Act, as amended, and subsequently enacted laws. Officials from five of the seven fusion centers we contacted said that I&A products were useful for providing an overall, strategic-level perspective and situational awareness regarding emerging threats. The officials also said they incorporate threat analysis from I&A products into their own reporting to state and local customers. They specifically cited I&A’s Joint Intelligence Bulletins and Roll Call Releases as products that were useful. Officials at the two fusion centers that generally did not find I&A analytic products to be useful said the products rarely addressed concerns that were unique to their geographic area, such as domestic terrorist groups that operate in their regions. In addition, officials from all seven fusion centers said that they valued access to I&A personnel, including analysts, for assistance with fusion center needs related to intelligence analysis. This assistance included training on proper methods for developing analytic products, reviewing fusion center products, and providing other analytic services, such as oral briefings on emerging threats that may be relevant to the center. I&A has deployed analysts to fusion centers on a limited basis. Specifically, I&A initially deployed one analyst to the Los Angeles and San Diego fusion centers. According to I&A officials responsible for FAST, the purpose of these embedded analysts is to work with fusion centers to do detailed analytic work in accordance with the tradecraft standards that I&A follows. The officials noted that these analysts provide an avenue for improving the tradecraft skills of fusion center analysts, which could help reduce delays in the issuance of I&A’s joint products with centers.
I&A has recently taken steps to deploy additional analysts to fusion centers. According to the I&A Deputy Research Director, I&A deployed an additional four analysts to fusion centers and plans to deploy a fifth analyst in June. Intelligence officials from all five DHS components we contacted generally stated that they did not consider themselves customers of I&A with regard to finished intelligence products. The officials recognized the overall high-level perspectives and situational awareness that I&A products can provide, but noted that their operational elements generally have a more critical need for tactical information and analysis to support operations and investigations—such as details on specific individuals or groups—which their own intelligence groups typically provide. Officials from all five components noted the value of I&A’s other analytic support, such as providing access to the Intelligence Community and training. Managers of the intelligence elements of two components (the U.S. Coast Guard and USCIS) cited a desire to have I&A analysts placed in the components’ intelligence offices to allow for better exchanges of information, analysis, and subject matter expertise. For example, according to U.S. Coast Guard officials, colocating I&A analysts within components would allow I&A to better serve the Secretary by facilitating the identification of relevant information and analysis. In January 2014, I&A officials said that they were in the process of developing and considering several plans to deploy analysts to jointly work on high-priority issues of mutual interest to I&A and components. ODNI officials said that they generally do not perceive the members of the Intelligence Community to be customers of I&A’s finished intelligence products because the products are not targeted to the Intelligence Community elements, and these agencies rely on other sources for analysis.
The officials noted that Intelligence Community elements also tend to have an international focus, while I&A focuses on the homeland. ODNI officials said that they appreciated that I&A serves state and local customers since they are not traditional Intelligence Community customers. The officials also noted that I&A frequently writes its products at an unclassified level, which hinders their usefulness to the Intelligence Community. According to ODNI officials, the daily interaction between their personnel and I&A analysts embedded at ODNI’s National Counterterrorism Center provides expertise and perspective on DHS data sets—such as data on individuals crossing the borders—that are valuable to the Intelligence Community. The officials noted that they value being able to integrate information from these data sets into their own analyses. Representatives from all nine private critical infrastructure sectors and subsectors we contacted said that I&A generally does not generate products that are either useful or relevant to them, except for I&A’s analytic efforts related to cyber security, for which all of the officials generally had positive views. The officials said that I&A’s finished intelligence products are usually more strategic than they require, and I&A does not fully understand the intelligence needs of their industries. The officials added that they are interested in developing a relationship with I&A. For example, officials from all nine sectors noted that having I&A analysts detailed to the private sector on a limited and temporary basis would better position I&A to understand their industries and related threats and provide intelligence analysis, including written reports, oral briefings, and other direct assistance. In January 2014, I&A officials said they are not considering long-term deployments (90 days or longer) in part because of staffing constraints, but are considering shorter-term deployments to interested private sector companies. 
According to I&A officials, I&A has historically given the private sector lower priority with respect to intelligence analysis than its other customer groups because of resource considerations. However, the officials stated that I&A is continuing to conduct outreach to the private sector to better determine its analytic needs and how to satisfy them. For example, I&A is participating in the Federal Bureau of Investigation’s (FBI) InfraGard program for sharing information and analysis with the private sector. Further, I&A’s Strategic Plan calls for enabling the National Network of Fusion Centers to serve as a conduit for disseminating products to private sector partners. I&A has not identified the types and mix of analytic products and services that best meet the needs of its various customers, but has recently taken steps that could support doing so. Specifically, the surveys I&A attaches to individual products help obtain customer feedback on various aspects of the products, but are not intended to obtain customer views on the full range of analytic products or services that I&A provides. However, in fiscal year 2012, I&A’s Production Management Division initiated a Core Customer Study to ascertain, among other things, its core customers’ organizations, the functions they perform, and their intelligence-related mission needs. The study is using a combination of interviews and surveys with members of I&A’s five customer groups. I&A characterized the Core Customer Study as a long-term effort to define, identify, and profile its core customers as a step to better identify their overall intelligence needs. Production Management Division officials said they expect to complete the study by the end of June 2014.
In addition, in January 2014, I&A initiated the Customer Satisfaction Assessment to obtain views on I&A products and services that were provided to customers during 2013 and forecast how I&A can better meet customer needs in the future. To support this assessment, and in response to our September 2012 report, I&A initiated a survey in March 2014 to gain a better understanding of its customers’ satisfaction with and use of I&A intelligence products and services. Among other things, the survey covers the types of I&A products the customers read in 2013, including finished intelligence and current intelligence (e.g., alerts and warnings); the extent to which the products were useful in informing actions and decisions, such as in making resource investments; and satisfaction with I&A services, such as threat briefings and tradecraft assistance. According to I&A officials, they sent the survey to customers who were likely recipients of I&A intelligence products and services in 2013. Specifically, I&A sent the survey to customers who maintained an active account on a portal where I&A products are posted (e.g., the Homeland Security Information Network), were members of a homeland security information-sharing network that regularly distributes I&A products (e.g., the terrorism liaison officer network), or were individually referred by their local fusion center or the I&A private sector outreach program. According to I&A officials, the results of this survey will be analyzed and provided directly to I&A intelligence analysts, managers, and senior leadership to help them better understand how their customers use I&A intelligence and determine how to tailor and improve products and services in the future.
I&A officials said that as part of the Customer Satisfaction Assessment they also plan to conduct detailed interviews with a smaller group of customers in order to obtain a more in-depth understanding of ways to increase the usefulness of I&A’s analytic products and services that will guide its improvement efforts. According to I&A officials, the assessment will also help I&A understand its customers’ desired mix of products and analytic services. I&A plans to complete the first draft of a report on the results of the assessment by the end of April 2014. Once completed, the Core Customer Study and Customer Satisfaction Assessment could provide I&A with information to help it more clearly define how it plans to support each customer group, the types of analytic products and services it will provide, and related levels of investment. The results could also assist I&A in determining how best to use its available analytic resources, including deciding whether additional deployments of I&A intelligence analysts to DHS components, fusion centers, and private sector entities would allow for better exchanges of information and analysis in support of national counterterrorism efforts. I&A has faced human capital challenges in recruiting and hiring the skilled workforce it needs and providing training and professional development opportunities that keep morale high and attrition low. Specifically, according to I&A’s recruitment strategy, I&A’s hiring authority under competitive service put it at a disadvantage compared with other organizations that were able to process hiring actions more quickly and offer career advantages associated with excepted service status. For example, according to officials, under competitive service status, it could take anywhere from several months to a year between the initial hiring offer and completion of the hiring process, a time lag that they said led many applicants to seek employment elsewhere.
The officials added that when they engaged in Intelligence Community career fairs, many individuals were looking for on-the-spot offers that other Intelligence Community elements could provide but that I&A could not, a fact that affected I&A’s ability to compete for top talent. In addition, according to officials, the perception of I&A as a less prestigious intelligence agency than others in the Intelligence Community also affects its ability to recruit top talent. To some extent, such hiring challenges contributed to I&A’s historical reliance on contractors to fill gaps in Intelligence Community experience. For example, a 2009 Homeland Security Institute study concluded that burdensome recruitment and hiring processes resulted in I&A’s heavy reliance on contract labor to staff key positions. I&A officials stated that the use of contractors was the primary method outside of federal hiring to acquire the needed tradecraft skills. They added that budgetary limitations on the number of authorized full-time equivalent (FTE) positions, the desire for operational flexibility to deal with uncertainty, and the need for “surge capability” also made contractors a viable alternative. As of November 2013, I&A’s Office of Analysis had the smallest percentage of contractors compared with the rest of I&A, but the office still relied on contractors to support analytical instruction, such as reviews of the Program of Analysis. I&A has also faced challenges in providing professional development opportunities for its workforce, and has experienced low morale scores and high rates of attrition, particularly among its lower-level analysts. Regarding professional development, I&A historically did not institutionalize a commitment to investing in its workforce, according to I&A officials. For example, they said that some managers sent their employees to training, assigned mentors, and provided rotational opportunities, while others did not.
Furthermore, I&A’s training program did not always focus on the mission-specific requirements that its workforce needed, according to officials. The 2009 Homeland Security Institute report also concluded that I&A lacked a robust training program. According to officials, these challenges contributed to I&A’s low morale and high rates of attrition. For example, I&A employee responses to the 2012 Intelligence Community Climate Survey show that 36 percent of employees responded positively when asked if they would recommend I&A as a good place to work, as shown in figure 2. Also, in September 2012, we reported that I&A had one of the lowest morale scores within DHS in OPM’s Federal Employee Viewpoint Survey—a tool that measures employees’ perceptions of whether and to what extent conditions characterizing successful organizations are present in their agencies. According to I&A officials, attrition among I&A intelligence analysts has led to a loss of valuable technical expertise and organizational knowledge. I&A data show that attrition has been highest for lower-level analysts, with attrition for GS-9 positions fluctuating from a high of 80 percent in fiscal year 2009 to 21 percent in fiscal year 2013. Based on exit interviews, I&A officials attributed the attrition to a number of factors, including low morale and limited promotion opportunities for lower-level analysts. The officials added that exit interviews indicate that analysts leave for positions with other Intelligence Community agencies, which are often perceived as higher-profile agencies. I&A has taken steps to help address these challenges. To address recruitment and hiring challenges, in November 2012, DHS requested excepted service status on behalf of I&A from ODNI and OPM to align itself with the rest of the Intelligence Community and better compete for top talent. In June 2013, I&A received excepted service status.
According to I&A officials, this will allow them to process hiring actions more quickly and offer career advantages associated with excepted service positions, which will in turn help them recruit and hire the right applicants. In addition, I&A human capital officials stated that they started working with DHS human capital officials in 2013 to improve intelligence analyst position descriptions by, for example, targeting specific skill requirements, to help ensure that I&A recruits the right people. To address low morale and attrition among its intelligence analysts and enhance professional development opportunities, I&A restructured its organizational grade structure to provide more career advancement and promotion opportunities. In 2012, I&A awarded a contract to the MITRE Corporation to review its workforce structure. MITRE concluded that I&A had more staff at the higher grade levels and that this was out of line with the rest of the Intelligence Community. As a result, in November 2012, I&A released a memorandum that noted the grade structure changes. Specifically, I&A increased the number of GS-7 through GS-12 analyst positions and decreased the number of GS-13 through GS-15 positions. According to the directors of the four Office of Analysis analytic divisions, this change is intended to help I&A grow its workforce through junior analysts and increase promotion opportunities that were previously not available. In fiscal years 2012 and 2013, I&A provided over 20 courses related to analytic tradecraft for its analytic workforce. According to I&A officials, these courses were developed in accordance with ODNI standards. In addition, some I&A analysts were detailed to other Intelligence Community elements or DHS components. According to I&A officials, this program helps I&A analysts better understand the operations and needs of other Intelligence Community elements and DHS components and helps them develop their analytic tradecraft.
In May 2013, I&A began to strategically assess whether its workforce has the right skills—such as written communications, program knowledge, and decision making—through a competency gap assessment. According to officials, this gap assessment will also serve as an opportunity for I&A to determine whether its current practice of recruiting and hiring individuals who have strong analytical skills, as opposed to subject matter expertise, and then training them to specialize in areas that support DHS’s unique operational missions, is appropriate. According to officials, the gap assessment will include a survey of the knowledge, skills, and capabilities of intelligence analysts across the enterprise. The survey will cover critical skills and competencies that I&A developed in conjunction with other DHS components. According to I&A officials, the results of the assessment are to be released in fiscal year 2014. I&A officials said that they intend to use the results of the assessment to help fill skill gaps within the current workforce through training and analyst performance management activities. I&A officials acknowledged the importance of, and the need to use, the results of the competency gap assessment given the absence of other mechanisms or documents that can help guide its overall workforce planning efforts. While the competency gap assessment is an important step in strategic workforce planning, I&A has not assessed the effects of its recently completed workforce efforts. In the past 3 years, I&A has implemented changes that are intended to strengthen its workforce; however, I&A currently has no established mechanisms in place to monitor and evaluate the effect of these efforts and use the results to make any needed changes. Specifically, we identified three areas where I&A could strengthen the monitoring and evaluation of its human capital efforts.
First, I&A officials stated that they use ODNI and DHS-OPM surveys and survey forms from analysts regarding training to review changes in morale; however, I&A does not have a systematic way of monitoring these activities and using the results to make changes. Second, the results I&A receives from the ODNI and DHS-OPM morale surveys reflect all of I&A, which makes it difficult for I&A to determine whether its workforce activities are having any effect on the intelligence analysts within the Office of Analysis. Third, I&A does not have any mechanisms in place to monitor and assess the effects of its excepted service status and grade restructuring. According to I&A officials, these mechanisms are not in place because I&A leadership was focused on other priorities. However, these officials agreed that such mechanisms could help ensure that its efforts are helping to address the challenges of its analytic intelligence workforce. According to DHS workforce planning guidance, workforce planners must monitor and assess the progress of their strategies, evaluate the outcomes, and use the results from the evaluation to revise the strategies, if needed. Consistent with this guidance, establishing mechanisms to monitor and evaluate workforce efforts, such as changes to I&A’s hiring authority and grade-level restructuring, could help the Office of Analysis determine if the efforts are achieving their intended results. In addition, using the results from these evaluations to determine any need for changes will help ensure that I&A is making sound workforce decisions. The organizations within the DHS Intelligence Enterprise play critical roles in uncovering and analyzing threats to the United States. DHS has made efforts, such as developing the Framework and POA, to integrate the analysis activities of its enterprise in support of common departmental intelligence priorities.
These efforts, however, are not functioning as intended, and as a result, DHS has not yet reached its goal of having an intelligence enterprise that responds to an integrated set of strategic departmental priorities that in turn drive analytical plans and resource decisions. Because of challenges I&A experienced in recruiting and hiring a skilled workforce, it has also taken actions intended to make it easier to attract, develop, and maintain skilled analysts, and many of these actions have been recently completed. By not monitoring and evaluating the effect of these actions, I&A cannot be confident that it is making progress in improving its ability to build and maintain the workforce it needs to effectively and efficiently analyze possible threats to the homeland. Furthermore, I&A lacks the information essential to knowing whether additional changes are needed to its workforce improvement activities and the strategies that underlie those activities. To help ensure that the intelligence analysis activities and resources throughout the enterprise align to an integrated set of strategic departmental intelligence priorities, we recommend that the Under Secretary for Intelligence and Analysis, Homeland Security, take the following two actions: (1) establish strategic departmental intelligence priorities in the Homeland Security Intelligence Priorities Framework that can be used to guide annual enterprise planning efforts, including intelligence analysis and resource management; and (2) ensure that once strategic departmental intelligence priorities are established, the Framework is used to inform the planned analytic activities of the DHS Intelligence Enterprise, as articulated in the Program of Analysis.
To help ensure that I&A maintains critical skills and competencies, when planning for and implementing current and future workforce actions, we recommend that the Secretary of Homeland Security establish mechanisms to monitor and evaluate workforce initiatives and use results to determine any needed changes.

We provided a draft of this report to DHS for review and comment. In its written comments, reprinted in appendix V and summarized below, DHS concurred with the three recommendations and described actions underway and planned to address them. In addition, DHS provided technical comments, which we incorporated as appropriate. With regard to the first recommendation, that I&A establish strategic departmental intelligence priorities in the Framework that can be used to guide annual enterprise planning efforts, DHS concurred and stated that, based in part on GAO's work, I&A has revised its approach to developing strategic departmental intelligence priorities. Further, DHS stated that the 2014 Framework was approved on April 9, 2014, and I&A will coordinate future refinements to the methodology with the Homeland Security Intelligence Enterprise prior to the beginning of the 2015 priorities process. We will determine if I&A's revised approach and the resulting 2014 Framework address the intent of our recommendation as part of our recommendation follow-up process. With regard to the second recommendation, that I&A ensure that, once strategic departmental intelligence priorities are established, the Framework is used to inform the planned analytic activities of the DHS Intelligence Enterprise, DHS concurred and stated that the priorities established in the 2014 Framework will inform the development of the 2015 POA.
Additionally, DHS stated in its letter that the 2015 POA will be informed by updated Intelligence Community-wide guidance from the Director of National Intelligence pertaining to both the development of the POA and key intelligence questions, and lessons learned from last year's development process. If fully implemented, these planned actions should address the intent of the recommendation. With regard to the third recommendation, that I&A establish mechanisms to monitor and evaluate workforce initiatives and use results to determine any needed changes, DHS concurred and stated that I&A is developing specific performance indicators to monitor and evaluate workforce initiatives and plans to develop a complete set of measures by the end of fiscal year 2014. If fully implemented, these planned actions should address the intent of the recommendation. We are sending copies of this report to the Secretary of Homeland Security, the Under Secretary for Intelligence and Analysis, and interested congressional committees as appropriate. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8777 or larencee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.

The Intelligence Community consists of 17 U.S. intelligence agencies whose mission is to collect and convey essential information that the President and members of the policymaking, law enforcement, and military communities require to execute their appointed duties. See 50 U.S.C.
§ 3003(4). The agencies include the following:

Department of Energy, Office of Intelligence and Counterintelligence
Department of Homeland Security (DHS), Office of Intelligence and Analysis (I&A)

Appendix II: Description of Key Finished Intelligence Products from the Six DHS Intelligence Entities in Our Review

Homeland Intelligence Today: DHS's daily product that provides the Secretary, DHS operational component leadership, and other federal partners with timely homeland-relevant analysis and reporting.
Provides strategic analysis on a variety of topics that impact homeland security; can range from covering a single issue to an in-depth, multi-themed analysis.
Produced jointly with the Federal Bureau of Investigation and focused on emergent or recent terrorism-related threats or events that are pertinent to the Homeland.
Produced for frontline law enforcement officers to highlight emergent terrorist or criminal techniques or tactics they may encounter in the field.
A sister product to Roll Call Release that is geared toward fire, rescue, and emergency management personnel to aid in planning, response, and mitigation.
Provides an informational overview on an issue, development, key figure, or lasting trend.
Provides a comprehensive analysis of threats and hazards that impact CBP operations.
Provides analysis, trends, and patterns that directly relate to CBP efforts.
Provides research, primarily associated with travel and imports, to other law enforcement agencies and the Intelligence Community.
Provides DHS and other members of the Intelligence Community with CBP analysis of terrorism and other criminal activities that affect border security.
Homeland Security Assessment (HSA): Provides a comprehensive analysis of a complex intelligence topic or target related to specific events or trends.
HSAs typically include a threat analysis; a discussion of potential threat actors and tactics, techniques, and procedures; and an outlook or review of potential mitigation strategies, countermeasures, or vulnerabilities.
Provides in-depth analysis of a single intelligence development, discovery, finding, or issue.
Covers new intelligence and summarizes developments on an issue or topic area of interest to ICE customers.
Provides information and analysis on a single threat or situation involving transportation security.
Provides analyses of threats to transportation's critical modal sectors, such as aviation and mass transit.
Posters focus on a single topic of concern, such as concealment techniques.
Analyzes how violent extremists, typically overseas, attack transportation modes in order to discern impacts on the homeland transportation environment.
Provides information on individual subjects such as their immigration history, current immigration status/expiration, previous and pending immigration benefits, and a list of immigration systems checked.
Contains the information in an individual's Immigration Systems History, as well as additional information on the evolution of the subject's immigration history.
Provides in-depth analysis and judgments of an event or development addressed in a previous product.
Provides immediate maritime intelligence and law enforcement situational awareness.
An analytic product providing judgments and prognosis regarding a specific intelligence issue.
Provides an overview and summary judgments on an intelligence issue based on all-source intelligence information.

This report addresses the following questions: To what extent are the intelligence analysis activities of the enterprise integrated to support departmental strategic intelligence priorities, and to what extent are enterprise analysis activities unnecessarily overlapping or duplicative?
To what extent do I&A's customers report that they find I&A products and other analytic services to be useful? What challenges does I&A face in maintaining a skilled analytic workforce, and what steps has it taken to address these challenges? To address the first objective, we analyzed documents produced by DHS and I&A related to setting intelligence priorities to determine how priorities are established and how the priorities are to be followed by the enterprise. Specifically, we reviewed the Department of Homeland Security Strategic Plan, Fiscal Years 2012-2016; the 2006 and 2008 versions of the DHS Intelligence Enterprise Strategic Plan; Office of Intelligence and Analysis Strategic Plan, Fiscal Year 2011–Fiscal Year 2018; Delegation to the Under Secretary for Intelligence and Analysis/Chief Intelligence Officer, August 10, 2012; DHS Management Directive 8110, Intelligence Integration and Management, January 30, 2006; Department of Homeland Security Directive 264-01, Intelligence Integration and Management, June 12, 2013; DHS Instruction Number 264-01-001, DHS Intelligence Enterprise, June 28, 2013; the 2005 and 2009 versions of the Homeland Security Intelligence Council Charter; I&A memos, including the fiscal year 2013 and the fiscal year 2014 DHS Enterprise Programmatic Guidance; DHS Intelligence Enterprise Programs of Analysis (POA) for fiscal years 2011 through 2014; and the Homeland Security Intelligence Priorities Framework for fiscal years 2012 and 2013 to determine the roles and responsibilities of DHS in managing its enterprise and the analytic efforts of the enterprise, and how DHS was carrying out these responsibilities. Additionally, we analyzed documents from DHS components for which intelligence analysis is a core function in supporting their unique missions and operations—the U.S. Coast Guard, Transportation Security Administration (TSA), U.S. Customs and Border Protection (CBP), U.S. Immigration and Customs Enforcement (ICE), and U.S.
Citizenship and Immigration Services (USCIS)—to help determine the roles and responsibilities of component agency intelligence programs and how they respond to the coordination and prioritization efforts of I&A. These included U.S. Department of Homeland Security Annual Performance Report, Fiscal Years 2011–2013, Appendix A: Measure Descriptions and Data Collection Methodologies; National Infrastructure Advisory Council, Intelligence Information Sharing, Final Report and Recommendations, January 10, 2012; Transportation Security Administration Office of Intelligence and Analysis Mission and Functional Statement; ICE Strategic Plan, FY 2010-2014; U.S. Immigration and Customs Enforcement, Homeland Security Investigations, Office of Intelligence, Analysis Division, Fact Sheet; Secure Borders, Safe Travel, Legal Trade, U.S. Customs and Border Protection, Fiscal Year 2009-2014 Strategic Plan; United States Citizenship and Immigration Services Intelligence and Information Products Fact Sheet; and U.S. Coast Guard Publication 2-0, Intelligence, May 2010; among others. We compared these efforts against requirements listed in the Homeland Security Act of 2002, as amended, that call on the DHS Chief Intelligence Officer (CINT) to establish the intelligence analysis priorities for the intelligence components of the department, consistent with any directions from the President and, as applicable, the Director of National Intelligence. According to DHS guidance, the appropriate vehicle for establishing enterprise-wide intelligence priorities is the Homeland Security Intelligence Priorities Framework. We met with the I&A Research Director and members of his staff, and the Director of the Information Sharing and Enterprise Management Division and his staff who are responsible for developing and implementing the management coordination mechanisms for the enterprise. We also met with representatives from the intelligence programs of USCIS, the U.S. 
Coast Guard, CBP, ICE, and TSA to discuss the coordination mechanisms established by DHS. Additionally, to address within the first objective whether overlap or duplication exists in the analytic activities of the DHS Intelligence Enterprise, we analyzed all 21 products written by I&A, CBP, and ICE where these three organizations shared responsibility to answer questions in the 2012 POA in support of DHS's border security mission. We selected the 2012 POA because it was the most recent available when we began this analysis. We selected this mission area for our analysis because this departmental mission contained contributions from the largest number of DHS intelligence components, and chose I&A, CBP, and ICE products to review because these three entities had the greatest number of key intelligence questions assigned to them within the border security mission area. By reviewing only one mission area, we cannot determine if there was duplication in other areas, but our findings support statements from intelligence officials in both I&A and the operational components that the content of their products differs because of their unique missions and primary customer sets and, therefore, the products are not duplicative, even when they cover the same broad topical area. Within the DHS mission area of securing the border, we requested an inventory of finished intelligence products for fiscal year 2012 from I&A, CBP, and ICE—the three entities with the greatest number of key intelligence questions assigned to them in this topic area. Products written in 2012 were chosen because this was the most recent complete fiscal year available to us at the time of this analysis. Based on our analysis of these product inventories, we identified six key intelligence questions for which I&A and either ICE or CBP both generated finished intelligence products.
In support of these six key intelligence questions, I&A, ICE, and CBP generated a total of 21 finished intelligence products—15 authored by I&A, 4 by ICE, and 2 by CBP. We analyzed these 21 products to determine the extent to which duplication existed by comparing the topics, sources, and customers of these products.

To address the second objective, we met with I&A staff from the Production Management Division to determine how I&A develops, administers, and analyzes feedback surveys included with all I&A finished intelligence products, as well as to discuss the efforts I&A had in place through February 2014 to improve customer satisfaction. We then analyzed the results of the feedback surveys that were included with all fiscal year 2012 I&A finished analytic products to determine what readers of these products thought about their quality and usefulness. We also met with representatives from four of the five I&A customer groups: DHS components; the Intelligence Community; state, local, tribal, and territorial governments; and the private sector. While we did not meet with representatives from DHS leadership, we did meet with representatives of the I&A unit that briefs leadership. Specifically, we met with representatives from the intelligence programs of the DHS components for which intelligence analysis is a core function in supporting their missions and operations—USCIS, the U.S. Coast Guard, CBP, ICE, and TSA—to discuss the extent to which these organizations find I&A analysis relevant and useful. We met with staff from the Office of the Director of National Intelligence—the office that administers the overall efforts of the Intelligence Community—to discuss their review of the quality of I&A analytic products, as well as the usefulness of these products to the Intelligence Community.
We met with DHS and state and local government agency staff members working at 7 of 78 state and major urban area fusion centers—centers that serve as primary focal points within the state and local environments for the receipt, analysis, gathering, and sharing of threat-related information among federal, state, local, tribal, and territorial homeland security partners. The fusion centers we met with were (1) the Arizona Counter Terrorism Information Center, (2) El Paso (Texas) Fusion Center (no DHS staff were located in the El Paso Fusion Center at the time of our discussion), (3) Montana All Threat Intelligence Center, (4) New York State Intelligence Center, (5) San Diego (California) Law Enforcement Coordination Center, (6) Texas Fusion Center in Austin, and (7) the Washington State Fusion Center. These fusion centers comprised 4 of the 6 centers we considered southwest border fusion centers and 3 of the 11 northern border fusion centers. We selected each center to provide access to a range of state-level users of I&A analytic products with an emphasis on border issues because of our examination of potential overlap or duplication related to DHS components’ intelligence analysis reporting on border issues. While information we obtained at these locations cannot be generalized across all fusion centers because we selected these locations based on the diversity of participation in the POA and geographic locations along the southwest and northern border, the units we visited provided us with an overview of users’ perspectives on the usefulness and relevance of I&A analytic products. We also met with representatives of the Partnership for Critical Infrastructure Security (PCIS), the National Council of Information Sharing and Analysis Centers (ISAC), and individual representatives from the Transportation and Commercial Facilities critical infrastructure sectors to determine how the private sector views the analytic services provided by I&A. 
We compared I&A's efforts against criteria in the Standards for Internal Control in the Federal Government that note the importance for management to communicate to, and obtain information from, both internal and external stakeholders that may have a significant impact on the organization's ability to achieve its goals and utilize resources effectively and efficiently. For objective two, we also reviewed the standards by which I&A's products are judged. To do so, we reviewed I&A Policy Directive 900.2, Producing Finished Intelligence (July 15, 2011); I&A Policy Directive 900.4, I&A Joint Production with State and Major Urban Area Fusion Centers (August 31, 2011); I&A Policy Instruction IA-901, DHS Intelligence and Analysis Review and Clearance of Analytic Products Disseminated Outside the Federal Government; I&A memos, including Fiscal Year 2013 and Fiscal Year 2014 DHS Intelligence Enterprise Programmatic Guidance; and ODNI Intelligence Community Directive 203, Analytic Standards (June 21, 2007). To obtain a general view of the quality of these products, we reviewed the 2009, 2010, 2011, and 2012 DHS Reports to Congress on Voluntary Feedback from State, Local, Tribal, and Private Sector Consumers; I&A's fiscal year 2011 Customer Feedback Report; and the September 2012 Production Performance Report, which includes monthly production and feedback information. For the third objective, we reviewed the U.S. Office of Personnel Management Position Classification Standard Flysheet for Intelligence Series GS-0132 (June 1960), the classification standard in effect as of December 2013, and the I&A Human Capital Standard Operating Procedures to obtain basic information about personnel standards and operations at I&A.
To determine the challenges I&A experienced in maintaining a skilled workforce, we reviewed correspondence between I&A and the Office of the Director of National Intelligence related to obtaining excepted service hiring authority (November 2012-June 2013) and I&A-reported data on personnel attrition. To obtain information on intelligence analyst hiring needs and practices, we reviewed the I&A memo on career ladder decision and hiring standards (November 2012), the DHS Competency Assessment Pilot frequently asked questions and presentation, and the I&A Recruitment Strategy (January 2010). To determine I&A training and development practices and needs, we reviewed the Fiscal Year 2013 Intelligence Training Academy Catalog, DHS Intelligence Learning Roadmap for Analysts (September 2012), and the Analysis 101/BITAC Study Report comparing two alternative introductory courses for intelligence analysts prepared for I&A by Booz Allen and SAIC (November 2008). We also reviewed the DHS Human Resources, Performance Management Program (December 2008), and I&A performance competencies to determine I&A's performance management practices. We then compared I&A's workforce planning activities with those enumerated in DHS's Workforce Planning Guide (November 2012), the DHS document that outlines the process for workforce planning at DHS. We also met with representatives from I&A's human capital office, training branch, mission support office, and the acting Chief of Staff to discuss human capital recruitment, hiring, training, and evaluation practices and procedures.

We conducted this performance audit from August 2012 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix IV provides additional information about our analysis of fragmentation, overlap, and duplication in DHS components' intelligence analysis activities.

The intelligence analysis activities within DHS are inherently fragmented, as multiple DHS components conduct intelligence analysis in support of the same broad intelligence topics. However, on the basis of our case study in the area of border security, we did not find evidence of unnecessary overlap or duplication among the activities. Officials from DHS intelligence entities said that implementing the mechanisms to date has helped them to minimize such overlap and duplication. The intelligence analysis activities of the enterprise are inherently fragmented, as multiple DHS components conduct intelligence analysis in support of shared departmental goals. As articulated in the fiscal year 2012 DHS Intelligence Enterprise Program of Analysis, three to five DHS intelligence components planned to provide intelligence analysis products in support of each departmental goal in fiscal year 2012. For example, as shown in table 1, five components planned to support the departmental goal of Securing and Managing Our Borders through their fiscal year 2012 intelligence analysis activities. Although this creates fragmentation, each entity is providing distinct perspectives because of its unique missions and primary customer sets. As shown in table 2, I&A focuses on threats to the homeland in general, whereas the operational components—such as CBP, ICE, and TSA—each focus more narrowly on supporting their unique operational missions through intelligence analysis. Further, DHS intelligence officials report that the primary customers of I&A are DHS leadership and state and local partners, whereas the primary customers of the operational components are generally internal component customers.
Despite distinct missions and primary customer sets, some overlap does exist in the secondary customers that the six DHS intelligence entities serve. For example, as shown in table 2, all six entities reported serving the Intelligence Community, and three entities reported serving state and local partners as part of their secondary customer sets. In some cases, it may be appropriate for multiple entities to be involved in the same programmatic or policy area because of the nature or magnitude of the federal efforts. For example, having multiple DHS entities contributing intelligence analysis to shared departmental goals and serving overlapping customers can be beneficial, as it provides a mechanism to share and validate information and to facilitate complementary analysis. However, fragmentation and overlap may create inefficiencies if customers are burdened with too much information or if entities are inefficiently using resources to develop duplicate products. We did not find evidence to suggest that DHS customers were being burdened with too much information or that DHS resources were being used to develop products that unnecessarily overlapped or were duplicative. Specifically, senior intelligence officials from all seven fusion centers we contacted said that they had not experienced any instances of unnecessary overlap or duplication in the border security–related intelligence products they received from various DHS components. Further, we did not find any duplicative products among those we reviewed in the area of border security that were generated by different components. We did identify one instance of overlap where two DHS components provided similar information to similar customers in different products, but the information was packaged differently such that the products were not duplicative. 
Specifically, we identified one instance where I&A and ICE each wrote an intelligence product on a similar topic—human smuggling—and for similar customers—DHS leadership and the Intelligence Community. Both products discussed the relationship between permissive visa policies and the potential for exploitation by human-smuggling networks. Further, both products specifically identified one particular country as susceptible to exploitation from human smugglers, at least in part because of its permissive visa policies. However, we determined that the overlapping information was presented in a different context in each product. Specifically, the I&A product provided a summary graphic depicting the visa permissiveness level of each country within a particular region of the world. This provided its customers with a snapshot of countries with permissive visa policies and a warning that these countries may be more susceptible to exploitation from human smugglers than others. This graphic was presented as part of I&A's Border Security Monitor—a quarterly compilation of relevant border security intelligence analysis from throughout the enterprise. Alternatively, the ICE product was a broader assessment of the factors that make certain locations vulnerable to exploitation from human smuggling networks. Accordingly, the ICE product provided an analysis of the reasons that the specifically identified country may be vulnerable to human smuggling, identifying its visa permissiveness policy as just one contributing factor. While our results cannot be generalized beyond the border security mission area, our findings support statements from intelligence officials in both I&A and the operational components that the content of their products differs because of their unique missions and primary customer sets and that the products therefore are not duplicative, even when they cover the same broad topical area.

Eileen R. Larence, (202) 512-8777 or larencee@gao.gov.
In addition to the contact named above, Eric Erdman (Assistant Director), Jonathan Bachman, Katherine Davis, Michele Fejfar, Michael Harmond, Eric Hauswirth, Katherine Lee, Thomas Lombardi, Bintou Njie, and Anthony Pordes made significant contributions to this report.

DHS plays a vital role in securing the nation, and its intelligence analysis capabilities are a key part of this effort. Within DHS, I&A has a lead role for intelligence analysis, but other operational components also perform their own analysis activities. GAO was asked to review the management of departmental analysis efforts. This report addresses the extent to which (1) DHS intelligence analysis activities are integrated to support departmental intelligence priorities, (2) I&A customers find analytic products and services useful, and (3) I&A has addressed challenges in maintaining a skilled analytic workforce. GAO examined mechanisms DHS used to coordinate analysis across components, I&A reports and feedback surveys, and human capital plans. GAO also interviewed officials from I&A, the five DHS components with intelligence analysis as a core function, the Office of the Director of National Intelligence, who represent the Intelligence Community, 7 of 78 fusion centers (focal points within states that analyze and share information), and the private sector. The fusion center and sector interviews, chosen based on geographic location and other factors, are not generalizable but provided insight into progress.

The Department of Homeland Security (DHS) has established mechanisms—including an intelligence framework and an analytic planning process—to better integrate analysis activities throughout the department, but the mechanisms are not functioning as intended.
For example, the framework does not establish strategic departmental intelligence priorities that can be used to inform annual planning decisions, such as what analytic activities to pursue and the level of investment to make, as called for in DHS guidance. According to officials from DHS's Office of Intelligence and Analysis (I&A), it can be challenging for DHS components to focus on developing both strategic priorities and more tactical priorities that support their specific operations. Absent strategic priorities, DHS used component subject matter experts and other information to develop key questions of common interest they would address through analysis. As a result, DHS does not have reasonable assurance that component analytic activities and resource investments are aligned to support departmental priorities. The mechanisms to integrate analysis, however, gave components insight into one another's work and helped them avoid unnecessary overlap and duplication. I&A customers had mixed views on the extent to which its analytic products and services are useful. GAO's interviews with representatives of I&A's five customer groups indicate that two groups—DHS leadership and state, local, tribal, and territorial partners—found products to be useful, while three groups—DHS components, the Intelligence Community, and the private sector—generally did not. Representatives of four of the five groups said that they found other types of services, such as briefings, to be useful. Results from surveys that are attached to I&A products indicate that most customers were very satisfied with the products' usefulness, but the results are not generalizable because they reflect only the views of customers who chose to respond. To address this issue, I&A is conducting more comprehensive surveys and interviews with customers to evaluate the products and services that best meet their needs. I&A expects to complete this effort by the end of June 2014. 
I&A has taken steps to address challenges it faced in maintaining a skilled workforce, but has not assessed whether its efforts are resolving the challenges. For example:

I&A faced challenges in recruiting and hiring analysts, in part because of its hiring authority, which put it at a disadvantage compared with other agencies that were able to process hiring actions more quickly. I&A's hiring authority was changed in 2013, a fact that could help ease these challenges.

I&A experienced low morale and high rates of attrition, particularly among its lower-level analysts. To help address these issues, I&A restructured its grade levels in 2012 to provide additional career advancement opportunities.

However, I&A has not established mechanisms to evaluate its efforts and use the results to make any needed changes because I&A leadership has focused on other priorities. Such mechanisms will help I&A evaluate if efforts are achieving their intended results of improving recruiting and hiring, bolstering morale, and reducing attrition. In addition, using the evaluation results to determine any needed changes will help ensure that I&A is making sound workforce decisions. GAO recommends, among other things, that DHS (1) establish strategic intelligence priorities and use them to inform analytic activities and (2) establish mechanisms to evaluate workforce initiatives and use results to determine any needed changes. DHS concurred with our recommendations.
DOD defines a UAV as a powered aerial vehicle that does not carry a human operator; can be land-, air-, or ship-launched; uses aerodynamic forces to provide lift; can be autonomously or remotely piloted; can be expendable or recoverable; and can carry a lethal or nonlethal payload. Generally, UAVs consist of the aerial vehicle, a flight control station, information and retrieval or processing stations, and sometimes wheeled land vehicles that carry launch and recovery platforms. UAVs have been used in a variety of forms and for a variety of missions for many years. After the Soviet Union shot down a U-2 spy plane in 1960, certain UAVs were developed to monitor Soviet and Chinese nuclear testing. Israel used UAVs to locate Syrian radars and was able to destroy the Syrian air defense system in Lebanon in 1982. The United States has used UAVs in the Persian Gulf War, Bosnia, Operation Enduring Freedom, and Operation Iraqi Freedom for intelligence, surveillance, and reconnaissance missions and to attack a vehicle carrying suspected terrorists in Yemen in 2002. The United States is also considering using UAVs to assist with border security for homeland security or homeland defense. Battlefield commanders' need for real-time intelligence has been a key reason for the renewed interest in UAVs. According to the Congressional Research Service, UAVs are relatively lightweight and often difficult to detect. Additional advantages include longer operational presence, greater operations and/or procurement cost-effectiveness, and no risk of loss of life of U.S. service members. DOD operates three UAV types—small, tactical, and medium-altitude endurance—in its force structure. The Air Force has operated the MQ-1 Predator since 1996 in intelligence, surveillance, and reconnaissance missions, using a variety of sensors and satellite data links to relay information, and in an offensive combat role using Hellfire missiles.
The Air Force also operates a small UAV called Desert Hawk, a 5-pound aerial surveillance system used by security personnel to improve situational awareness for force protection. The Army, Navy, and Marine Corps have at various times operated the RQ-2 Pioneer since 1986. Only operated by the Marine Corps today, the Pioneer provides targeting, intelligence, and surveillance. The Marine Corps also operates a small UAV called Dragon Eye for over-the-hill reconnaissance. This small, 4.5-pound UAV is currently in full-rate production. Originally envisioned to be a joint Army/Navy/Marine Corps program, the RQ-5 Hunter was cancelled in 1996 after low-rate initial production. The Army currently operates the residual Hunters for intelligence, surveillance, and reconnaissance. The Army also has selected the RQ-7 Shadow to provide intelligence, surveillance, and reconnaissance at the brigade level, and full-rate production was approved in 2002. Another system, the Raven, a small, 4-pound UAV, is being purchased commercially off the shelf by both the Army for regular unit support and the Air Force for special operations. Numerous other UAVs of various sizes remain in development. These include the RQ-4 Global Hawk, a nearly 27,000-pound, jet-powered UAV with a wingspan of over 116 feet used for intelligence, surveillance, and reconnaissance over an area of up to 40,000 square nautical miles per day; the RQ-8 Fire Scout, a vertical takeoff and landing UAV weighing nearly 2,700 pounds; and the Neptune, weighing under 100 pounds with a wingspan of 7 feet and optimized for sea-based operations. In addition, congressional action in recent years has been directed toward promoting an increase in the number and type of missions on which UAVs can be used. 
For example, section 220 of the Department of Defense Authorization Act for Fiscal Year 2001 specifies that it shall be a goal of the armed forces that one-third of the aircraft in the operational deep strike aircraft fleet be unmanned by 2010. Moreover, in section 1034 of the National Defense Authorization Act for Fiscal Year 2004, Congress mandated a DOD report on the potential for UAVs to be used for a variety of homeland security and counterdrug missions. Finally, the fiscal year 2004 Defense Appropriations Conference Report directs that DOD prepare a second report by April 2004 detailing UAV requirements that are common to each of the uniformed services. Most of our prior work has focused on the development, testing, and evaluation of unmanned aerial vehicles. As recently as September 2000, we reported that DOD was deciding to procure certain UAV systems before adequate testing had been completed. We found that buying systems before successfully completing their testing had led repeatedly to defective systems that were later terminated or required costly retrofits or redesigns to achieve satisfactory performance. Conversely, when DOD focused UAV acquisition on mature technologies that proved the military utility of a given vehicle, the department had an informed knowledge base upon which to make a decision. For example, even though the Predator UAV was based on the existing Gnat 750 UAV, the department required Predator’s performance to be validated. As a result, Predator moved quickly to full-rate production and, at the time of our current review, had performed a variety of operational missions successfully. Through our prior work, we have also periodically raised the question of the potential for duplication of efforts among the services and the effectiveness of overarching strategy documents and management approaches to avoid duplication and other problems. 
For example, in June 2003 we reported that the Air Force and Navy, which previously were independently developing unmanned combat aerial vehicles, had agreed to jointly develop a new system for offensive combat missions that met both of their needs. However, we also pointed out that while one program is more efficient than two, the participation of two services would increase the challenges of sustaining funding and managing requirements. Similarly, as early as 1988, we raised concerns about a variety of management challenges related to UAV development. At that time, various congressional committees had expressed concern about duplication in the services’ UAV programs and stressed the need to acquire UAVs that could meet the requirements of more than one service, as the Air Force and Navy have recently agreed to try. In response to congressional direction, DOD developed a UAV master plan, which we reviewed at that time. We identified a number of weaknesses in the 1988 master plan, including that it (1) did not eliminate duplication, (2) continued to permit the proliferation of single-service programs, (3) did not adequately consider cost savings potential from manned and unmanned aircraft trade-offs, and (4) did not adequately emphasize the importance of common payloads among different UAV platforms. DOD generally concurred with that report and noted that it would take until 1990 to reconcile service requirements for acquiring a common family of UAVs. Since our 1988 report, the overall management of defense UAV programs has gone full circle. In 1989 the DOD Director of Defense Research and Engineering set up the UAV Joint Project Office as a single DOD organization with management responsibility for UAV programs. With the Navy as the Executive Agency, within 4 years the Joint Project Office came under criticism for a lack of progress. 
Replacing the office in 1993, the Defense Airborne Reconnaissance Office was created as the primary management oversight and coordination office for all departmentwide manned and unmanned reconnaissance. In 1998, however, this office also came under criticism for its management approach and slow progress in fielding UAVs. In that same year, this office was dissolved and UAV program development and acquisition management were given to the services, while the Assistant Secretary of Defense for Command, Control, Communications and Intelligence was assigned to provide oversight for the Secretary of Defense. Overall, Congress has provided funding for UAV development and procurement that exceeds the amounts requested by DOD during the past 5 fiscal years, and the services to date have obligated about 99 percent of these funds. From fiscal year 1999 through fiscal year 2003, DOD requested approximately $2.3 billion, and Congress, in its efforts to encourage rapid employment of UAVs by the military services, has appropriated nearly $2.7 billion to develop and acquire UAVs. In total, the services have obligated $2.6 billion of the appropriated funds. (See table 1.) Generally, the additional funding provided by Congress was targeted for specific programs and purposes, enabling the services to acquire systems at a greater rate than originally planned. For example, in fiscal year 2003 the Air Force requested $23 million to acquire 7 Predators, but Congress provided over $131 million—an increase of approximately 470 percent—enough to acquire 29 Predators to meet operational demands in the war against terrorism. The Air Force has obligated 71 percent of the Predator 2003 funding during its first program year. About $1.8 billion (67 percent) of the money appropriated during the fiscal year 1999-2003 period went for research, development, test and evaluation of the various models, as shown in table 2. 
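The Predator funding increase cited above can be verified with simple arithmetic. The sketch below uses the rounded dollar figures from the report, so the result is approximate rather than exact.

```python
# Rounded figures from the report's fiscal year 2003 Predator example
requested = 23_000_000   # Air Force request for 7 Predators
provided = 131_000_000   # appropriation by Congress ("over $131 million")

# Percentage increase of the appropriation over the request
increase_pct = (provided - requested) / requested * 100
print(round(increase_pct))  # approximately 470, matching the report's figure
```

Because the report rounds both figures, the computed value (about 469.6 percent) matches the "approximately 470 percent" stated in the text.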
The programs were generally divided into efforts to develop tactical UAVs and medium-to-high-altitude endurance UAVs and, until 2002 when the Predator was armed, were focused on meeting surveillance and reconnaissance needs. Only three systems—the Army’s Shadow and the Air Force’s Predator and Global Hawk—have matured to the point where they required procurement funding during fiscal years 1999 through 2003. By fiscal year 2003, appropriations totaled nearly $880 million, as shown in table 3. DOD estimates that an additional $938 million in procurement funding will be needed through fiscal year 2005. DOD’s planning for developing and fielding UAVs does not provide reasonable assurance that UAVs will be integrated into the force structure efficiently, although the department has taken certain positive steps to improve its management of the UAV program. Specifically, DOD created a joint UAV Planning Task Force and developed a key planning document, the UAV Roadmap 2002-2027. However, neither the Joint Task Force nor the Roadmap is sufficient to provide DOD with reasonable assurance that it is efficiently integrating UAVs into the force structure. Consequently, the individual services are developing their own UAVs without departmentwide guidance, thus increasing the risk of unnecessarily duplicating capabilities and leading to potentially higher costs and greater interoperability challenges. Since 2000 DOD has taken positive steps to improve the management of the UAV program. In October 2001 the Under Secretary of Defense for Acquisition, Technology, and Logistics created the joint UAV Planning Task Force to function as the joint advocate for developing and fielding UAVs. The Task Force is the focal point to coordinate UAV efforts throughout DOD, helping to create a common vision for future UAV-related activities and to establish interoperability standards. 
For example, the Task Force is charged with developing and coordinating detailed UAV development plans, recommending priorities for development and procurement efforts, and providing the services and defense agencies with implementing guidance for common UAV programs. Moreover, the development of the 2002 Roadmap has been the Task Force’s primary product to communicate its vision and promote UAV interoperability. The Roadmap is designed to guide U.S. military planning for UAV development from 2002 to 2027 and describes current programs, identifies potential missions for UAVs, and provides guidance on developing emerging technologies. The Roadmap is also intended to assist DOD decision makers in building a long-range strategy for UAV development and acquisition to support defense plans contained in such future planning efforts as the Quadrennial Defense Review. While the creation of the joint Task Force and the UAV Roadmap are important steps to improve management of the UAV program, they are not enough to provide reasonable assurance that DOD is developing and fielding UAVs efficiently. The UAV Roadmap does not constitute a comprehensive strategic plan for developing and integrating UAVs into force structure. Moreover, the Joint Task Force’s authority is generally limited to program review and advice but is insufficient to enforce program direction. While DOD has some elements of a UAV strategic-planning approach in place, it has not established a comprehensive strategic plan or set of plans for developing and fielding UAVs across DOD. The Government Performance and Results Act of 1993 provides a framework for establishing strategic planning and performance measurement in the federal government, and for ensuring that federal programs with the same or similar goals are closely coordinated and mutually reinforcing. The strategic planning requirement of this framework consists of six key components, described in table 4. 
When applied collectively and combined with effective leadership, the components can provide a management framework to guide major programs, efforts, and activities, including the development and integration of UAVs into the force structure. However, neither the UAV Roadmap nor other DOD guidance documents represent a comprehensive strategy to guide the development and fielding of UAVs that complement each other, perform the range of priority missions needed, and avoid duplication. DOD officials acknowledged that the Office of the Secretary of Defense has not issued any guidance that establishes an overall strategy for UAVs in DOD. While high-level DOD strategic-planning documents provide some general encouragement to pursue transformational technologies, including the development of UAVs, these documents do not provide any specific guidance on developing and integrating UAVs into the force structure. Nonetheless, the Roadmap represents a start on a strategic plan because it incorporates some of the key components of strategic planning provided by the Results Act framework as shown by the following: Long-Term Goals—The Roadmap states its overall purpose and what it hopes to encourage the services to attain. The Roadmap refers to the Defense Planning Guidance’s intent for UAVs as a capability and indicates that the guidance encourages the rapid advancement of this capability. At the same time, it does not clearly state DOD’s overall or long-term goals for its UAV efforts. Similarly, while it states that it wants to define clear direction to the services, it does not clearly identify DOD’s vision for its UAV force structure from 2002 through 2027. Approaches to Obtain Long-Term Goals—The Roadmap’s Approach section provides a strategy for developing the Roadmap and meeting its goal. This approach primarily deals with identifying requirements and linking them to needed UAV payload capabilities, such as sensors and associated communication links. 
The approach then ties these requirements to forecasted trends in developing technologies in an effort to develop a realistic assessment of the state of the technology in the future and the extent to which this technology will be sufficient to meet identified requirements. At the same time, however, the Roadmap does not provide a clear description of a strategy for defining how to develop and integrate UAVs into the future force structure. For example, the Roadmap does not attempt to establish UAV development or fielding priorities nor does it identify the most urgent mission-capability requirements. Moreover, without sufficient identification of priorities, the Roadmap cannot link these priorities to current or developing UAV programs and technology. Beyond strategic planning, the Results Act calls for agencies to establish results-oriented performance measures and to collect performance data to monitor progress. The Roadmap addresses, in part, key elements of performance measurement, as shown in the following: Performance Goals—The Roadmap established 49 specific performance goals to accomplish a variety of tasks. Some of these goals are aimed at fielding transformational capabilities without specifying what missions will be supported by the new capabilities. Others are to establish joint standards and control costs. Nonetheless, of the 49 goals, only 1 deals directly with developing and fielding a specific category of UAV platform to meet a priority mission-capability requirement—suppression of enemy air defenses or strike electronic attack. The remaining goals, such as developing heavy fuel aviation engines suitable for UAVs, are predominantly associated with developing UAV or related technologies, and UAV-related standards and policies to promote more efficient and effective joint UAV operations. Thus, the Roadmap does not establish overall UAV program goals. 
Performance Indicators—Some of the 49 performance goals have performance indicators that could be used to evaluate progress, such as the reliability goal for decreasing the annual mishap rate for large UAVs. However, many other goals have no established indicators, such as developing standards to maximize UAV interoperability. Furthermore, the Roadmap does not establish indicators that readily assess how well the program will meet the priority mission capabilities needed by the services and theater commanders. While the Roadmap has incorporated some key strategic-planning components, it only minimally addresses the other key components. According to officials in the Office of the Secretary of Defense, the UAV Roadmap was not intended to provide an overarching architecture for UAVs departmentwide. It does, however, provide some significant guidance for developing UAV and related technologies. In addition to the 49 separate goals, the Roadmap also provides a condensed description of DOD’s current UAVs, categorizing them as operational, developmental, and other (residual and conceptual) UAV systems. The Roadmap further sought to identify current and emerging requirements for military capabilities that UAVs could address. In addition to the Roadmap, the Joint Requirements Oversight Council has reviewed several UAVs and issued guidance for some systems, such as the Army’s Shadow and the Air Force’s Predator. According to Joint Staff officials, however, neither the Joint Staff nor the council has issued any guidance that would establish a strategic plan or overarching architecture for DOD’s current and future UAVs. In addition, in June 2003 the Chairman of the Joint Chiefs of Staff created the Joint Capabilities Integration and Development System to provide a top-down capability-based process. 
Under the system, five Functional Capabilities Boards have been chartered, each representing a major warfighting capability area as follows: (1) command and control, (2) force application, (3) battle space awareness, (4) force protection, and (5) focused logistics. Each board has representatives from the services, the Combatant Commanders, and certain major functions of the Under Secretary of Defense. Each board is tasked with developing a list of capabilities needed to conduct joint operations in its respective functional area. Transformation of these capabilities is expected, and the boards are likely to identify specific capabilities that can be met by UAVs. Nonetheless, according to Joint Staff officials, these initiatives will also not result in an overarching architecture for UAVs. However, the identification of capabilities that can be met by UAVs is expected to help enhance the understanding of DOD’s overall requirement for UAV capabilities. As a joint advocate for UAV efforts, the joint UAV Planning Task Force’s authority is limited to program review and advice. The Task Force Director testified in March 2003 that the Task Force does not have program directive authority, but provides the Under Secretary of Defense for Acquisition, Technology, and Logistics with advice and recommended actions. Without such authority, according to the Director, the Task Force seeks to influence services’ programs by making recommendations to them or proposing recommended program changes for consideration by the Under Secretary. Nonetheless, according to DOD officials, the Task Force has attempted to influence the joint direction of service UAV efforts in a variety of ways, such as reviewing services’ budget proposals, conducting periodic program reviews, and participating in various UAV-related task teams. For example, the Task Force has encouraged the Navy to initially consider an existing UAV rather than develop a unique UAV for its Broad Area Maritime Surveillance mission. 
The Task Force has also worked with the Army’s tactical UAV program, encouraging it to consider using the Navy’s Fire Scout as an initial platform for the Future Combat Systems class IV UAV. The Task Force also regularly reviews services’ UAV program budgets and, when deemed necessary, makes budget change proposals. For example, the Task Force, in conjunction with other Secretary of Defense offices, was successful in maintaining the Air Force’s Unmanned Combat Aerial Vehicle program last year when the Air Force attempted to terminate it. The Task Force was also successful in overturning an attempt by the Navy to terminate the Fire Scout rotary-wing UAV program. However, the Task Force cannot compel the services to adopt any of its suggestions. For example, according to the Director, no significant progress has been made in achieving better interoperability among the services in UAV platform and sensor coordination, but work continues with the services, intelligence agencies, Department of Homeland Security, and U.S. Joint Forces Command to this end. As they pursue separate UAV programs, the services and DOD agencies risk developing UAVs with duplicate capabilities, potentially leading to greater costs and increased interoperability challenges. The House Appropriations Committee, in a 2003 report, expressed concern that without comprehensive planning and review, there is no clear path toward developing a UAV force structure. Thus, the committee directed that each service provide an updated UAV roadmap. These reports were to address the services’ plans for the development of UAVs and how current UAVs are being employed. Officials from each of the services indicated that their UAV roadmap was developed to primarily address their individual service’s requirements and operational concepts. 
However, in their views, high-level DOD guidance—such as the Joint Vision 2020, National Military Strategy, and Defense Planning Guidance—did not constitute strategic plans for UAVs that would guide the development of their individual service’s UAV roadmap. These officials further stated that the Office of the Secretary of Defense’s 2002 UAV Roadmap provided some useful guidance, especially in regard to UAV technology, but was not used to guide their UAV roadmap’s development. Moreover, they did not view the Office of the Secretary of Defense’s Roadmap as a departmentwide strategic plan nor an overarching architecture for integrating UAVs into the force structure. Moreover, according to the service officials developing the service-level UAV roadmaps, there was little collaboration with other services’ UAV efforts. Thus, DOD has little assurance that the current approach to developing and fielding UAVs in the services will result in closely coordinated or mutually reinforcing program efforts, as recommended by the Results Act. While the Office of the Secretary of Defense and the Joint Chiefs of Staff have tried to coordinate these efforts through the Joint UAV Planning Task Force, the absence of a guiding strategy and sufficient authority has made it difficult to have reasonable assurance that development and fielding are being done efficiently. If not managed effectively, this process can potentially lead to the development and fielding of UAVs across DOD and the services, which may unnecessarily duplicate each other. For example, the Army, Marine Corps, and Air Force are individually developing small, backpackable, lightweight UAVs for over-the-horizon and force protection reconnaissance missions. Likewise, both the Marine Corps and Army are individually pursuing various medium-sized tactical UAVs with both fixed and rotary wings to accomplish a variety of missions, including tactical reconnaissance, targeting, communications relay, and force protection. 
Without a strategic plan and an oversight body with sufficient program directive authority to implement the plan, DOD has little assurance that its investment will result in UAV programs being effectively integrated into the force structure. Consequently, DOD risks poorly integrating UAVs into the force structure, which could increase development, procurement, and logistics costs; increase the risk of future interoperability problems; and unnecessarily duplicate efforts from one service to the next. To enhance management control over the UAV program, we recommend that the Secretary of Defense take the following two actions: (1) establish a strategic plan or set of plans, based on mission requirements, to guide UAV development and fielding by modifying the Roadmap or developing another document or documents and, at a minimum, ensure that the plan links operational requirements with development plans so that the services develop systems that complement each other, perform the range of missions needed, and avoid duplication; and (2) designate the UAV Task Force or another appropriate organization to oversee the implementation of a UAV strategic plan and provide this organization with sufficient authority to enforce the plan’s direction and promote joint operations and the efficient expenditure of funds. In written comments on a draft of this report, DOD partially concurred with our first recommendation and disagreed with the second. DOD partially concurred with our recommendation that the Secretary of Defense establish a strategic plan or set of plans to guide the development and fielding of UAVs by modifying the Roadmap or developing another appropriate document. 
DOD stated that its preferred way to address UAV planning was through the Joint Capabilities Integration and Development System, which is a capability-based planning process at the Joint Staff level that will identify UAV capabilities as needed across the five major joint warfighting areas through the use of the Functional Capabilities Boards. We continue to believe that DOD needs a departmentwide strategic plan establishing the mission capabilities required of UAVs and the detailed strategy for effectively developing and acquiring these capabilities. DOD acknowledged that its UAV Roadmap is not a broad strategic plan. Moreover, as we pointed out in our report, DOD recognized in its UAV Roadmap the need for a focused strategic plan for UAV capabilities, stating that the Roadmap was “to assist Department of Defense decision makers in developing a long-range strategy for UAV development and acquisition in future Quadrennial Defense Reviews and other planning efforts”—a strategy that has yet to be created. Such a strategic plan would provide the Office of the Secretary of Defense, the joint UAV Planning Task Force, or other appropriate authorities with the additional leverage and guidance to ensure effective oversight of the services’ development and integration of UAV capabilities into the joint warfighting force structure. The Joint Capabilities Integration and Development System process, which DOD referred to, may be a useful tool for DOD to implement its capabilities-based planning approach. However, we continue to believe that a strategic plan for UAVs would be an important element in assuring UAV decisions and development reflect decisions made within the Joint Capabilities Integration and Development System process and are consistent with the strategic plan’s intent. 
DOD did not concur with our recommendation to designate the UAV Planning Task Force or another appropriate organization to oversee the implementation of a UAV strategic plan and provide this organization with sufficient authority to enforce the plan’s direction. In its response, DOD indicated that the Secretary of Defense already has the authority needed to accomplish the intent of our recommendation. To buttress its point, DOD identified four actions taken to influence service development, evaluation, acquisition, and fielding of certain UAVs. We acknowledge in our report that the formation of the Task Force represents a step in the right direction for DOD and that the Task Force has achieved some successes in coordinating some UAV programs. In our recent report on the Unmanned Combat Aerial Vehicle, in fact, we gave the Task Force credit for bringing the Air Force and Navy programs together into a joint program. However, the Task Force has not always been successful. For example, no significant progress has been made in achieving better interoperability among Service UAVs and sensors. Our concern is that with UAVs assuming ever-greater importance as key enabling technologies, and with increasing sums of money being allocated for a growing number of UAV programs, DOD needs more than a coordination mechanism. It needs an organization with authority to achieve the most cost-effective development of UAVs. Consequently, we continue to believe that the recommendation is sound, and that to effectively implement a strategic plan for UAVs, the Secretary needs to designate an appropriate office with the authority to oversee and implement the strategy. DOD’s comments are included in their entirety in appendix II. DOD provided technical comments, which we included in our report as appropriate. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days from its issue date. 
At that time, we will send copies of this report to other appropriate congressional committees; the Secretary of Defense; and the Director, Office of Management and Budget, and it will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4914. Key contributors to this report are listed in appendix III. To determine the extent to which the Department of Defense (DOD) requested, received, and used funds for major unmanned aerial vehicle (UAV) development efforts during fiscal years 1999-2003, we reviewed department and service documentation for major operational UAV programs, programs that are in procurement, and programs that are under development and to be procured by 2010. Funding data were obtained from various sources. We obtained the funding levels that DOD requested for UAV programs from the justification books used to support DOD’s budget requests and the DOD Comptroller’s Congressional Funding tracking database. We also obtained the funding levels appropriated to service UAV programs by analyzing the services’ Appropriation Status by Fiscal Year Program and Subaccounts reports. Additionally, we analyzed these reports to determine the extent to which these appropriated funds were obligated within their allowed program years. We did not conduct a comprehensive audit to reconcile the differences in appropriated and obligated funds. To assess whether DOD’s approach to developing and employing UAVs ensures that UAVs will be efficiently integrated into the force structure, we reviewed key departmentwide strategic documents, such as the Defense Planning Guidance, to identify the level of DOD’s strategic planning for UAVs and its impact on service planning. 
We discussed the level of strategic planning for UAVs with key DOD and service officials from organizations with key roles in DOD’s UAV development, such as the Office of the Secretary of Defense’s Joint UAV Planning Task Force; the Office of the Assistant Secretary of Defense for Command, Control, Communications and Intelligence; the Joint Requirements Oversight Council; and U.S. Joint Forces Command. We reviewed each service’s current UAV roadmap and held discussions with officials from service activities involved in planning and developing their UAV force structure roadmaps. We also reviewed in detail the Office of the Secretary of Defense’s Unmanned Aerial Vehicles Roadmap 2002-2027, and assessed the extent to which it establishes an overall DOD management framework for developing and employing UAVs departmentwide. We used the principles embodied in the Government Performance and Results Act of 1993 as criteria for assessing the UAV Roadmap. We performed our work from June 2003 to February 2004 in accordance with generally accepted government auditing standards. In addition to the person named above, Fred Harrison, Lawrence E. Dixon, James Mahaffey, James Driggins, R.K. Wild, and Kenneth Patton also made major contributions to this report. Nonproliferation: Improvements Needed for Controls on Exports of Cruise Missile and Unmanned Aerial Vehicles. GAO-04-493T. Washington, D.C.: March 9, 2004. Nonproliferation: Improvements Needed to Better Control Technology Exports for Cruise Missiles and Unmanned Aerial Vehicles. GAO-04-175. Washington, D.C.: January 23, 2004. Defense Acquisitions: Matching Resources with Requirements Is Key to the Unmanned Combat Air Vehicle Program’s Success. GAO-03-598. Washington, D.C.: June 30, 2003. Unmanned Aerial Vehicles: Questionable Basis for Revisions to Shadow 200 Acquisition Strategy. GAO/NSIAD-00-204. Washington, D.C.: September 26, 2000. 
Unmanned Aerial Vehicles: Progress of the Global Hawk Advanced Concept Technology Demonstration. GAO/NSIAD-00-78. Washington, D.C.: April 25, 2000. Unmanned Aerial Vehicles: DOD’s Demonstration Approach Has Improved Project Outcomes. GAO/NSIAD-99-33. Washington, D.C.: August 30, 1999. Unmanned Aerial Vehicles: Progress toward Meeting High Altitude Endurance Aircraft Price Goals. GAO/NSIAD-99-29. Washington, D.C.: December 15, 1998. Unmanned Aerial Vehicles: Outrider Demonstrations Will Be Inadequate to Justify Further Production. GAO/NSIAD-97-153. Washington, D.C.: September 23, 1997. Unmanned Aerial Vehicles: DOD’s Acquisition Efforts. GAO/T-NSIAD-97-138. Washington, D.C.: April 9, 1997. Unmanned Aerial Vehicles: Hunter System Is Not Appropriate for Navy Fleet Use. GAO/NSIAD-96-2. Washington, D.C.: December 1, 1995. Unmanned Aerial Vehicles: Performance of Short Range System Still in Question. GAO/NSIAD-94-65. Washington, D.C.: December 15, 1993. Unmanned Aerial Vehicles: More Testing Needed Before Production of Short Range System. GAO/NSIAD-92-311. Washington, D.C.: September 4, 1992. Unmanned Aerial Vehicles: Medium Range System Components Do Not Fit. GAO/NSIAD-91-2. Washington, D.C.: March 25, 1991. Unmanned Aerial Vehicles: Realistic Testing Needed Before Production of Short Range System. GAO/NSIAD-90-234. Washington, D.C.: September 28, 1990. Unmanned Vehicles: Assessment of DOD’s Unmanned Aerial Vehicle Master Plan. GAO/NSIAD-89-41BR. Washington, D.C.: December 9, 1988.

The current generation of unmanned aerial vehicles (UAVs) has been under development for defense applications since the 1980s. UAVs were used in Afghanistan and Iraq in 2002 and 2003 to observe, track, target, and strike enemy forces. These successes have heightened interest in UAVs within the Department of Defense (DOD) and the services. 
GAO was asked to (1) determine how much funding DOD requested, was appropriated, and was obligated for major UAV development efforts during fiscal years 1999-2003 and (2) assess whether DOD's approach to planning for UAVs provides reasonable assurance that its investment in UAVs will facilitate their integration into the force structure. During the past 5 fiscal years, Congress provided more funding for UAV development and procurement than requested by DOD, and to date the services have obligated most of these funds. To promote rapid employment of UAVs, Congress has provided nearly $2.7 billion for UAV development and procurement compared with the $2.3 billion requested by DOD. Because Congress has appropriated more funds than requested, the services are able to acquire systems at a greater rate than planned. For example, in fiscal year 2003, the Air Force requested $23 million to buy 7 Predator UAVs, but Congress provided over $131 million--enough to buy 29 Predators. DOD's approach to planning for developing and fielding UAVs does not provide reasonable assurance that its investment in UAVs will facilitate their integration into the force structure efficiently, although DOD has taken positive steps to improve the UAV program's management. In 2001 DOD established a joint Planning Task Force in the Office of the Secretary of Defense. To communicate its vision and promote commonality of UAV systems, in 2002, the Task Force published the UAV Roadmap, which describes current programs, identifies potential missions, and provides guidance on emerging technologies. While the Roadmap identifies guidance and priority goals for UAV development, neither it nor other key documents represent a comprehensive strategic plan to ensure that the services and DOD agencies develop systems that complement each other, perform all required missions, and avoid duplication. 
Moreover, the Task Force serves in an advisory capacity to the Under Secretary of Defense for Acquisition, Technology, and Logistics, but has little authority to enforce program direction. Service officials indicated that their service-specific planning documents were developed to meet their own needs and operational concepts without considering those of other services. Without a strategic plan and an oversight body with sufficient authority to enforce program direction, DOD risks fielding a poorly integrated UAV force structure, which could increase costs and the risk of future interoperability problems.
Institutional personnel are generally nondeployable military and civilians who support Army infrastructure activities, such as training, doctrine development, base operations, supply, and maintenance. One major exception is in the medical area, since some personnel assigned to institutional positions are expected to deploy in wartime. A significant amount of the Army’s personnel and budget are devoted to institutional functions. For example, the Army’s Program Objective Memorandum for fiscal year 1998 included about 132,000 active Army and about 247,000 civilian institutional positions. These positions represented about 27 percent of the total active Army and 100 percent of Army civilians. Funding for institutional personnel totals about $18 billion per year. Army institutional functions have received increasing scrutiny in recent years because the Army has been unable to (1) support personnel requirements based on workload and (2) ensure that these functions are carried out in the most efficient and cost-effective manner. In addition, the Army continues to rely on its active personnel to perform institutional functions despite shortfalls in operational forces. The Army Audit Agency reported in 1992 and 1994 that the Army did not know its workload and thus could neither justify personnel needs and budgets nor improve productivity and efficiency. Our February 1997 report recommended that the Secretary of the Army report to the Secretary of Defense, as a material weakness under the Federal Managers’ Financial Integrity Act, the Army’s long-standing problem with determining institutional personnel requirements without an analysis of the workload. The Army agreed with the recommendation and prepared a plan, as required by the act, to resolve the weakness. The plan was approved by the Assistant Secretary of the Army for Manpower and Reserve Affairs in October 1997. All corrective actions detailed in this plan are to be completed by December 1999. 
In January 1995, the Army began an effort to reengineer institutional processes and redesign organizational structures so that the institutional Army would effectively and efficiently develop, generate, deploy, and sustain operational forces. The Army’s reengineering principles include eliminating unnecessary layering of functions and reducing the number of major headquarters commands. The Army stated that savings in active Army institutional positions are to be reinvested in the operational forces. The redesign effort, referred to as Force XXI Institutional Redesign, is to be conducted in three phases, with each phase examining different functions. Phase I was completed in March 1996, and phases II and III are expected to be completed in March 1998 and March 2000, respectively. In May 1997, the Secretary of Defense announced the results of the Quadrennial Defense Review, which included reducing 33,700 civilian Army positions and some active Army positions. According to Army officials, these reductions would be in addition to the 13,000 positions already programmed between fiscal years 1998 and 2003 or those resulting from phase I redesign efforts. In September 1997, the Deputy Secretary of Defense introduced DOD’s strategic plan to implement the Government Performance and Results Act. The plan contains six overall goals, including to “. . .fundamentally reengineer the Department and achieve a 21st Century infrastructure by reducing costs and eliminating unnecessary expenditures while maintaining required military capabilities.” Army plans, such as the Force XXI Institutional Redesign effort, are to be linked to the overall goals in the strategic plan. The Army has made some progress by developing a material weakness plan, but it may have difficulty achieving the plan’s December 1999 completion date for the following three reasons.
First, Army commands are not fully implementing the required 12-step methodology, and the Army has acknowledged that staffing levels for oversight reviews to ensure compliance are insufficient. Second, as of October 1997, critical subplans outlining how the Army intends to meet its milestones had not been developed for the costing system and the computer-based methodology. The Army’s progress in implementing the computer-based methodology during its initial pilot test has been slower than the Army has estimated. Last, milestones for critical portions of the plan have slipped from original estimates, even though the plan’s overall completion date has remained the same. Delays in implementing the plan’s corrective actions could result in further reductions of institutional personnel without the benefits of workload analysis and assessments of risks and tradeoffs. Army regulations require that the institutional workforce be based on workload. However, our February 1997 report concluded that the Army cannot identify and prioritize its institutional workload and therefore does not have an analytical basis for assigning institutional personnel or assurance that it has the minimum workforce for accomplishing institutional missions. The material weakness plan acknowledges this problem, stating that “. . .managers at all levels do not have the information needed to improve work performance, improve organizational efficiency, and determine and support staffing needs, manpower budgets, and personnel reductions.” The Army’s plan contains some logical steps to correct this material weakness, including the Army’s two near-term solutions to identifying the number of institutional positions based on an analysis of the workload. These solutions are a computer system for depots and arsenals, called the Army Workload Performance System (AWPS), and the 12-step methodology analysis for major commands. 
AWPS was developed to integrate workload and workforce information so depot managers can project the workforce needed to accomplish various levels of workload. The 12-step methodology was developed to link personnel to workload, reduce the cost of accomplishing work, and help managers make choices as they balance personnel and workload. The Army plans to integrate the workload and workforce information provided by the 12-step methodology and AWPS with the Civilian Manpower Integrated Costing System. According to Army officials, the Army will not be able to successfully link workload and workforce to the budget without the integration of these three elements. For this reason, our review primarily focused on these initiatives. Although the Army established the 12-step methodology in April 1996 as the standard process for determining institutional requirements, commands’ requirements programs fall short of what the Army expects. (See app. II for a list of Army major commands.) The 12-step method includes analyses of missions and functions, opportunities to improve processes, workload drivers, workforce options (including civilian versus military and contracting versus in-house), and organizational structure. Figure 1 shows the components of the 12-step method. Even though Army commands will continue to have some flexibility in creating their own requirements program, they will be required to perform all of the analyses included in the 12-step method. According to draft Army Regulation 570-4, although specific processes for determining requirements can vary, all processes must be approved by Army headquarters and have a common conceptual framework that consists of the 12-step analyses. Currently, the Army has no formal review process for determining whether major commands are using the 12-step methodology. 
Our review of requirements programs at three major commands (Army Materiel Command, Training and Doctrine Command, and Forces Command) found that the programs differ substantially in coverage and content and do not include all of the 12-step analyses. The Army Materiel Command’s review did not systematically analyze labor sources (steps 4 and 9), such as examining the potential for contracting out various functions. Also, the Command did not perform efficiency reviews (step 3) because it assumed that the organization had already become efficient as a result of downsizing. Further, the Command did not consider customer satisfaction (step 7) as an element of timeliness and quality of services or examine best practice approaches (step 3). Even though these steps were not performed, the Command reported that it had validated 79,941 personnel of its 80,542 assigned end strength—more than 99 percent. The Training and Doctrine Command’s process examines similar functions across installations to look for best practices and analyzes whether a particular installation is structured efficiently. However, the process does not include a decrement list (step 7), which contains options of how a command may perform its mission by merging, eliminating, or transferring functions if it receives fewer positions than expected. The Forces Command’s process examines functions at each installation. When this examination is completed, Command officials stated that they would compare functions across installations. As of November 1997, the Command had completed reviews at 3 of 11 installations and had not compared similar functions across the installations to perform the best practice analyses required in step 3. According to Command officials, Forces Command plans to examine best practices at the conclusion of its individual installation reviews in September 1998. 
The material weakness plan establishes procedures for the Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs to review commands’ requirements programs to ensure that they include the 12-step analyses. The Army’s plan to review commands’ compliance with the 12-step method may be delayed if Manpower and Reserve Affairs does not receive sufficient staff to conduct oversight reviews. According to a Manpower and Reserve Affairs official, the office is to (1) certify commands’ requirements programs and their compliance with the 12 steps, (2) conduct quality assurance reviews of commands’ requirements studies, and (3) assist major commands by conducting 12-step reviews on a contract basis. According to the plan, certification reviews are scheduled to start in March 1998, and quality assurance reviews are scheduled to begin in June 1998. To successfully accomplish these tasks within the plan’s time frames, a Manpower and Reserve Affairs official estimated that at least 35 additional staff would be needed. However, the material weakness plan states that only nine staff would be hired. The Assistant Secretary stated that executing the plan would require more resources. The lack of staff could delay both the certification and quality assurance reviews and prevent the Army from realizing the full benefits of this approach. Staff from the Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs cited their review of air traffic control operations as an example of how the Army expects to benefit from proper application of the 12-step methodology. At the beginning of the study, these institutional positions totaled 2,238 at multiple locations. The study used the 12-step method to develop a workforce model and recommended centralizing all tactical personnel in a single battalion stationed at the Army’s Aviation School at Fort Rucker, Alabama. 
The study concluded that this consolidation could save 226 military positions, which the Army could transfer to meet other requirements, and $5.5 million annually in stationing and operating costs. AWPS is the Army’s second solution for determining workload-based institutional requirements, identifying opportunities to achieve depot efficiencies, and linking workload, personnel, and dollars. AWPS consists of three modules—performance measurement control, workload forecasting, and workforce forecasting—to determine workload-based personnel requirements at the depots, arsenals, and ammunition plants. The performance measurement control module compares actual to planned cost and schedule performance, thereby allowing users to identify problem areas. This module can identify the work centers contributing to the most significant cost and schedule variances. The workload forecasting module stores project data, labor expenditures, performance data, and scheduling information by work center. This module allows managers to compare workload levels to available direct labor and analyze changes in forecasted workload. This comparison can reveal mismatches or overloads before firm commitments are made to customers. Finally, the workforce forecasting module contains information on employee skills and leave and attrition rates. This information provides shop and depot managers with an accurate picture of the overall number of employees and the number that are available in each work center. Analyzing the workforce by skill groups allows depot commanders to plan for the amount of work that can be handled and to consider overtime, contracting, or reassigning workers among different work centers. The Army has been developing AWPS since February 1996. 
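The comparison performed by the workload and workforce forecasting modules can be illustrated with a minimal sketch. This is a hypothetical illustration of the concept described above, not AWPS itself; all work center names, rates, and hour figures are invented for the example.

```python
# Illustrative only -- a toy model of comparing forecast workload against
# available direct labor, the kind of mismatch check the AWPS forecasting
# modules are described as supporting. All names and numbers are hypothetical.

def available_hours(headcount, productive_hours_per_person, leave_rate, attrition_rate):
    """Direct-labor hours a work center can supply after leave and attrition."""
    effective_staff = headcount * (1 - leave_rate) * (1 - attrition_rate)
    return effective_staff * productive_hours_per_person

def flag_mismatches(work_centers, tolerance=0.10):
    """Return work centers whose forecast workload exceeds available labor by
    more than `tolerance` -- candidates for overtime, contracting, or
    reassignment before firm commitments are made to customers."""
    flagged = []
    for wc in work_centers:
        supply = available_hours(wc["headcount"], wc["hours_per_person"],
                                 wc["leave_rate"], wc["attrition_rate"])
        if wc["forecast_hours"] > supply * (1 + tolerance):
            flagged.append((wc["name"], round(wc["forecast_hours"] - supply)))
    return flagged

centers = [
    {"name": "engine shop", "headcount": 40, "hours_per_person": 1600,
     "leave_rate": 0.08, "attrition_rate": 0.05, "forecast_hours": 80000},
    {"name": "avionics shop", "headcount": 25, "hours_per_person": 1600,
     "leave_rate": 0.08, "attrition_rate": 0.05, "forecast_hours": 30000},
]

print(flag_mismatches(centers))
```

In this toy run, only the overloaded center is flagged, along with its shortfall in hours, which mirrors the report’s point that such a comparison can reveal mismatches before commitments are made.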
The established goals for AWPS are (1) having all three modules operational at all five depots by January 1998; (2) operating a supplementary module (i.e., resource schedule and control) for supporting personnel assignments to projects by fiscal year 2000; and (3) having all modules on line and operational at depots, arsenals, and ammunition facilities by fiscal year 2000. However, as of December 1997, the Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs had not written the subplan for implementing AWPS or identified the specific steps or milestones needed to achieve the goals. Instead, the Army has set short-term, interim steps as AWPS progresses. For example, in August 1997, the Army established specific steps for correcting data errors between August 1997 and January 1998. Without a detailed implementation schedule, however, the Army lacks the tools it needs to ensure that it can meet the milestones in the material weakness plan. For example, the Army’s milestone for implementing AWPS at arsenals and ammunition facilities has already changed from July to December 1998. Figure 2 shows the difference in milestones between the June 1997 draft plan and the October 1997 approved plan. Implementation of AWPS at Corpus Christi Army Depot and other locations has been more difficult than the Army originally estimated. For example, in response to our February 1997 report, DOD reported that AWPS had been successfully tested at Corpus Christi Army Depot. Also, the Army expected the system to be operational at all five depots by March 1997. However, according to Industrial Operations Command officials, the Army must still test and validate two of the three modules at Corpus Christi and correct data errors from feeder systems. Our review showed that, even though AWPS equipment and software had been installed at all five depots, none of the three modules is being fully used at any location, including Corpus Christi. 
Army Materiel Command officials cited problems that could affect the Army’s ability to implement AWPS at the depots by December 1997. First, the performance measurement control module has been undergoing testing and validation since March 1997 and was planned to be fully operational by December 1997, assuming that the data errors would be corrected. As of August 1997, the Corpus Christi Army Depot was correcting data errors and therefore was not using this module to manage any depot work, not even work at the shop floor level as the Army had originally claimed. The other two modules are planned to be operational by February 1998, assuming that the data errors are corrected. An unresolved problem in the workload forecasting module is how to program work that will be started in one fiscal year and completed during the following fiscal year. The amount of repair work assumed affects management decisions on planning and scheduling the work and the workforce needed. Second, the Army states in its material weakness plan that AWPS training at the five depots was to be completed by December 1997. However, as of November 1997, AWPS users were not fully trained, and some training requirements were not yet defined. Army officials stated that training on the performance measurement control module has been completed at the five depots. However, Corpus Christi Army Depot officials stated in August 1997 that 257 staff members at the depot have been trained. The depot employs approximately 1,500 personnel. Although not all 1,500 personnel need further training, depot and Industrial Operations Command officials agreed that additional training is required to teach shop floor supervisors and depot managers how to interpret AWPS data and how to use it to identify work areas needing improvement. 
Command officials stated that training for the workforce forecasting module was to be completed by December 1997, but training requirements for the workload forecasting module have not yet been defined. Last, Industrial Operations Command officials told us that AWPS is still an evolving concept and that corporate-level system requirements are not yet defined. For example, no final decision has been made concerning whether this Command and the Army Materiel Command will install the Decision Support System, which would enable commands to examine data from subordinate units and help identify processes that could be reengineered to improve performance. In December 1997, Army officials decided to add a material module to monitor ordering and delivery of repair parts. According to Army officials, the Army could realize benefits once AWPS is operational and system users are trained. In July 1997, the Assistant Secretary of the Army for Manpower and Reserve Affairs stated that all depots and arsenals using AWPS will be able to match workload requirements and personnel projections. Thus, any personnel reductions will be based on the knowledge of work that will not be performed. AWPS could also be used for setting performance goals, such as reducing repair costs and cycle times, but Army officials stated that they have no intentions of using AWPS for this purpose. The Civilian Manpower Integrated Costing System will be the Army’s distributed, integrated database for costing institutional personnel requirements and linking workload and workforce to the budget. Army officials expect this system to provide funding information for various workload and workforce levels that the 12-step method and AWPS project. However, the subplan detailing the specific steps and milestones for implementing the system has not been developed. 
Without the subplan, the Army has no mechanism to measure its progress; therefore, managers will not know whether intervention is necessary to meet milestones. The system is essential for the Army to effectively prioritize work to be funded and clearly identify work remaining unfunded. The material weakness plan includes an October 1999 milestone for implementing the Civilian Manpower Integrated Costing System at Army headquarters and major commands and using the system to base institutional budgets on workload analyses. The plan only includes one interim step, and the milestone for this action has slipped. For example, the milestone for implementing the system at Army headquarters changed from May to December 1998. Also, monitoring progress is essential because offices other than Manpower and Reserve Affairs are involved. According to Manpower and Reserve Affairs officials, the Financial Management and Comptroller’s office is developing part of the system. The officials also stated that successful implementation will require compatible equipment at major commands and training the command’s personnel how to use the system. However, milestones for these events are not identified. Delays in implementing the material weakness plan’s corrective actions could hamper the Army’s efforts to efficiently allocate its institutional resources. The Army’s workload analysis methods (12-step and AWPS) could enhance future decisions affecting institutional force structure. The 12-step methodology includes an analysis to structure organizations efficiently and assess whether positions should be filled by military, civilian, or contractor personnel. Such information could be useful to managers in deciding how to allocate reductions with the least effect on accomplishing institutional missions. The Army programmed reductions of 6,200 institutional positions during fiscal year 1998 and another 7,000 positions between fiscal year 1999 and 2003. 
The Quadrennial Defense Review mandates further reductions of 33,700 civilian positions and some active Army positions. Delayed implementation may result in these planned reductions being made without the benefit of workload analysis and assessments of risks and tradeoffs. Force XXI Institutional Redesign is the Army’s effort to reengineer its processes and streamline its organizational structure. It includes consolidating major commands and realigning their missions to more efficiently perform institutional functions. The Army defines reengineering as a “. . .fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance.” The Army Vice Chief of Staff is responsible for reengineering Army processes and organizations. The Deputy Chief of Staff for Operations and Plans is the executive agent for redesign assessments and is responsible for performing day-to-day support functions. Other Army headquarters offices are responsible for conducting the assessments and implementing approved initiatives. Even though redesign efforts began in January 1995, there has been no net decrease in the number of major commands, and two of the redesign studies have been canceled. Also, the dollar and personnel savings estimates are overstated. The Army reported that redesign’s phase I initiatives would save $1.7 billion over 6 years and that implementation would cost almost $27 million. The savings estimates are overstated because they do not include significant implementation costs of at least $405 million. Also, most of the 4,000 active Army positions that were to be transferred from institutional to operational forces were based on assumptions that may not occur. In addition, since no single office monitors the results of institutional redesign efforts, the Army has no systematic way of knowing the status of savings, implementation costs, or institutional position transfers. 
The Army’s redesign document, draft Pamphlet 100xx, states that it is intended to provide a vision for redesigning the institutional force and serve as the foundation for institutional doctrine. The pamphlet states general goals of improving institutional force efficiencies but, other than proposing models for reducing the number of major command headquarters, does not cite specific, measurable performance goals. However, the Government Performance and Results Act requires federal agencies to identify strategic goals and develop performance measures to gauge progress toward achieving each goal. The pamphlet is consistent with this principle, stating that “clear performance measures should be identified to gauge organizational progress.” The Army’s institutional redesign effort has not reduced the number of major commands, even though redesign documents state that the Army will strive to do so. The redesign pamphlet introduces organizational models that would reduce the Army’s current 15-major-command structure to 8 or 3 major commands. For example, the three-command structure would manage the Army’s core capabilities of developing the force, generating and projecting the force, and sustaining the force. Army headquarters would retain responsibility for directing and resourcing capabilities. During redesign phase I, the Army redesignated a major command—the Information Systems Command—as a subcommand of Forces Command. The Army also created a new major command—the Space and Missile Defense Command. Thus, there has been no net decrease in the number of major commands. Some redesign transfers of functions from Army headquarters to major commands have not yet resulted in significant efficiencies. For example, the Recruiting Command transferred intact from Army headquarters to the Training and Doctrine Command in October 1997 as a major subordinate command.
However, the Recruiting Command’s institutional positions have decreased by less than one-half of 1 percent. According to Army data, the Recruiting Command had 9,256 positions in fiscal year 1997, and the Army projects 9,210 positions in fiscal year 1998. The Army plans to merge the Recruiting Command with the Training and Doctrine Command’s Cadet Command in October 1999, a move that the Army expects will result in organizational efficiencies and fewer institutional positions. The Army plans to conduct a business process reengineering study to determine the most effective and efficient organization, which is expected to result in fewer organizational layers. The Army initially planned to examine the following seven areas during phase II of the redesign effort: installation management; unit, joint, and interservice training; security and law enforcement; financial management; medical and health; intelligence; and supply, services, and materiel. However, officials from the Deputy Chief of Staff for Operations and Plans told us that the financial management and supply, services, and materiel assessments have been canceled. According to these officials, the Financial Management and Comptroller’s office has chosen to use internal efforts to identify financial management efficiencies, rather than complete an institutional redesign assessment, because the Assistant Secretary of the Army for Financial Management and Comptroller reports to the Secretary of the Army and not the Army Vice Chief of Staff. Further, the officials said that the Army Materiel Command will not complete the supply, services, and materiel assessment because the Command was responding to the mandated Quadrennial Defense Review reductions. Pamphlet 100xx encourages outsourcing institutional functions. A recent Defense Science Board study and the Quadrennial Defense Review also concluded that some institutional functions should be contracted.
If the Army reduced its reliance on active military institutional personnel, more military personnel would be available for operational units, including deployable support units, which have historically experienced shortfalls. Although the number of Army institutional positions has decreased since 1992, Army data show that the proportion of active Army institutional to operational forces has remained at about 29 percent and is projected to remain at this level through fiscal year 2003, as shown in figure 3. In addition, the proportion of all active Army institutional positions to the total number of institutional positions is projected to increase slightly, from 34.2 percent in 1992 to 35.8 percent in 2003. Table 1 compares the number of military and civilian institutional positions in fiscal years 1992 and 2003. Phase I savings estimates are overstated because significant costs are not included in the Army’s 1998-2003 Program Objective Memorandum and savings estimates are not definitive. Specifically, the memorandum included savings of $1.7 billion and almost $27 million in implementation costs resulting from 107 institutional redesign initiatives. However, at least $405 million in implementation costs were not included in the memorandum. The Army Program Analysis and Evaluation Office is to develop the service’s Program Objective Memorandum. According to an official from this office, limited cost data were included in the memorandum because the offices responsible for the initiatives did not provide cost data in a timely manner. For example, the Army Program Analysis and Evaluation Office did not include $69 million in implementation costs for five logistics initiatives. In addition, the Army Program Analysis and Evaluation Office did not include implementation costs for the Senior Reserve Officer’s Training Corps initiative.
This initiative proposed replacing the Senior Reserve Officer’s Training Corps active duty institutional personnel with a combination of active, reserve, or contracted former military personnel. The Reserve Officer’s Training Corps program is in place at 300 colleges and enables students to graduate with a degree and receive an officer’s commission. In February 1996, the Deputy Chief of Staff for Operations and Plans initially estimated that the Senior Reserve Officer Training Corps initiative would cost $336 million over a 4-year period for contracting retired personnel to replace all 2,100 active Army personnel in the corps. However, the Army’s Program Objective Memorandum did not include any of the implementation costs associated with the hiring of contractor personnel to conduct the Senior Reserve Officer’s Training Corps program. The Army’s memorandum only included implementation cost of $2 million for RAND to study the concept of hiring contractor personnel. Once implemented, the initiative is expected to require a recurring operations and maintenance cost of $40,000 per contractor per year. For example, if all 2,100 active Army personnel were replaced by contractors—in general, retired officers—the Army would incur a cost of $84 million per year. Additionally, the $1.7 billion savings estimate is not definitive because two offices disagree on the savings anticipated. The Deputy Chief of Staff for Logistics is responsible for five logistics initiatives, which represent 40 percent of the $1.7 billion phase I savings. The Army Program Analysis and Evaluation Office and the Office of the Deputy Chief of Staff for Logistics identified savings that varied from $12 million to $57 million per initiative, as shown in table 2, even though the net difference amounted to approximately $13 million. 
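The recurring-cost figures quoted above are internally consistent; a quick arithmetic check, using only numbers stated in the report, reproduces the $84 million annual figure:

```python
# Quick check of the report's recurring-cost figures (all numbers from the text).
cost_per_contractor = 40_000      # recurring O&M cost per contractor per year
positions = 2_100                 # active Army Senior ROTC positions to replace

annual_cost = cost_per_contractor * positions
print(annual_cost)                # the $84 million per year cited in the report

# The earlier $336 million estimate over 4 years implies the same annual figure.
assert 336_000_000 / 4 == annual_cost
```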
According to the Army Program Analysis and Evaluation Office, the savings are also included in the 1998-2003 Future Years Defense Program, but specific initiatives and groupings (such as logistics-related initiatives) cannot be tracked because they are combined with other Army efforts. According to the Principal Deputy Assistant Secretary of the Army for Financial Management and Comptroller, the burden is on the major command to identify a “substitute bill payer” if redesign savings cannot be achieved, since the savings have already been included in the fiscal year 1998-2003 Program Objective Memorandum. Officials from the Army Program Analysis and Evaluation and Deputy Chief of Staff for Operations and Plans offices concurred. For example, the fiscal year 1998-2003 memorandum claimed savings of approximately $430 million for the single stock fund initiative. This effort is based on the belief that a single stock fund eliminates duplicative materiel and financial management functions. However, the Army Materiel Command, as the proponent for this initiative, claims that fiscal year 1998 projected savings of $30 million will not be realized because of problems in implementing changes to financial systems. The Principal Deputy Assistant Secretary for Financial Management and Comptroller told us that $30 million in savings will be realized in fiscal year 1998—either from this initiative or elsewhere. Phase I projected that 3,914 active Army positions could be transferred from institutional to operational forces, but most of these transfers were based on assumptions that may not occur. Our February 1997 report stated that many of the active Army space transfers were based on initiatives that have not been fully tested or approved; therefore, the savings were not assured. 
As a result, we recommended that the Secretary of the Army closely monitor the military positions the Army planned to save from the redesign initiatives and have a contingency plan in place in the event the personnel savings do not materialize. DOD concurred with this recommendation. Officials from the Deputy Chief of Staff for Operations and Plans stated that the 3,914 spaces were transferred from institutional to operational forces, but most of the spaces did not come from the phase I redesign initiatives. Two initiatives, which account for 2,850, or 73 percent, of the 3,914 active Army spaces, will not produce the projected number of spaces. For example, the Senior Reserve Officer’s Training Corps initiative was expected to transfer 2,100 positions. However, RAND currently estimates the initiative will yield between 800 and 1,050 spaces because the Army decided not to contract out all 2,100 spaces and is testing a combination of active, reserve, and contract personnel. Additionally, the transfer of positions is also based on reducing attrition. The Army assumed reduced attrition would free 750 training and recruiting institutional positions. If personnel stay in the Army, then it would not need to recruit and train replacements. However, the Training and Doctrine Command currently projects an increase in initial entry training requirements from fiscal years 1998 to 1999 rather than a decrease. Army headquarters officials acknowledged that they could not explain how the 750 spaces were calculated, and Training and Doctrine Command officials said the command was not involved in deriving them. Even though the Deputy Chief of Staff for Operations and Plans is the executive agent for redesign assessments, there is no single office that systematically manages and monitors redesign results.
Therefore, at any given time, the Office of Operations and Plans does not know the status of specific initiatives, dollar savings, implementation costs, or progress in reducing institutional positions. In addition, any differences in projected costs and savings have not been reconciled between the Army Program Analysis and Evaluation and the Army offices responsible for specific initiatives. The Assistant Secretary of the Army for Financial Management and Comptroller is responsible for quarterly Army performance reviews. These reviews are for the Secretary of the Army, Army Chief of Staff, and function chiefs to discuss problem areas. However, the Office of the Assistant Secretary states that institutional redesign initiatives are monitored and discussed only if their implementation or savings become jeopardized. A Financial Management and Comptroller written statement explains that none of the redesign initiatives have been designated as topics to be monitored during the quarterly reviews, including the Senior Reserve Officer’s Training Corps, which accounts for one-half the projected position transfers and $336 million in unaccounted-for implementation costs. A Financial Management and Comptroller official told us that some quarterly reviews were canceled and never rescheduled and that monitoring institutional redesign results would require an investment of too many resources.
Pamphlet 100xx discusses the need to achieve efficiencies in performing institutional functions. For example, the pamphlet states that it is necessary to demonstrate that cost savings and/or operational efficiencies will result from implementing redesign initiatives. However, beyond this overarching guidance, the Army has not set specific, measurable performance goals or assessed its progress in achieving them (e.g., program outcome evaluations).
More recently, DOD guidance to the military services for implementing the Government Performance and Results Act states that the services should identify performance measures that demonstrate how the services’ plans, such as Army Force XXI, achieve the goals of DOD’s strategic plan. A sound, analytically based methodology would help the Army to ensure that its institutional force is efficiently organized and comprises the minimum number of personnel. Such a methodology is essential for the Army to make data-based decisions on how to allocate resources among institutional organizations, have assurance the highest priority functions are funded, and be aware of the risks in not funding some institutional functions. The use of workload-based criteria in implementing the programmed downsizing of 13,000 positions and Quadrennial Defense Review reductions could help the Army minimize effects on its ability to perform institutional functions and may help introduce more efficient organizations and processes. A smaller institutional force may also generate savings the Army could apply to its modernization programs or its operational forces. Although the Army has a plan to correct this material weakness, the plan is incomplete, and the Army may have difficulty accomplishing the corrective actions within established time frames. The plan provides a mechanism to ensure compliance with the Army’s methodology for determining institutional requirements. However, if the plan’s certification and quality assurance milestones are extended due to insufficient resources, the Army will be making reductions without knowing if commands are performing the analyses required to make sound decisions about staffing levels and reduce the cost of accomplishing institutional functions. The Army’s plan is to simultaneously develop workload approaches (12-step method and AWPS) and a system to calculate the cost for institutional positions. 
Until all three efforts are completed and integrated, the Army cannot be assured that it has the minimum essential institutional force, and the Army’s planning, programming, budgeting, and execution system for institutional functions will not be based on workload. If key subplans remain undeveloped, the Army has no method for assessing its progress toward meeting the plan’s current completion date of December 1999. As a result, further reductions or retention of institutional personnel may result without the benefits of workload analysis and assessments of risks and tradeoffs. Army oversight is necessary to ensure that Force XXI institutional redesign results are achieved. To date, the Army has not identified specific, measurable redesign goals, even though its own guidance acknowledges the importance of doing so. Army documents include general goals for improving institutional efficiency but, other than reducing the number of major commands, do not specify measures to achieve efficiencies. Without measurable performance goals, it may be difficult for the Army to know when its vision for the institutional force, as stated in Pamphlet 100xx, is achieved. Further, savings will be less than projected. In fact, the Army may not know the source of the savings because no single office monitors the status of redesign initiatives or their implementation costs. The Force XXI redesign concept includes proposals to reduce the number of major commands and realign their functions. Since the 12-step methodology includes analyses of how to structure and staff organizations efficiently, the Army could coordinate implementing major command realignments with the 12-step analysis techniques. Such coordination could result in institutional efficiencies, which would provide the Army an opportunity to transfer military institutional personnel to fill shortfalls in support forces. 
This transfer would increase the proportion of Army resources devoted to missions and decrease the proportion devoted to infrastructure. To improve the Army’s ability to accurately project institutional requirements, allocate institutional personnel, and make informed, analysis-based decisions on risks and tradeoffs, we recommend that the Secretary of the Army complete subplans of the material weakness plan, modify milestones to accurately reflect available resources to accomplish corrective actions, and closely monitor results. To improve the Army’s ability to accurately project institutional requirements derived from AWPS, we recommend that the Secretary of the Army direct the Assistant Secretary for Manpower and Reserve Affairs to develop a long-range master plan to implement AWPS, including milestones and definitions of corporate-level requirements. To improve the Army’s ability to make informed, analysis-based decisions on benefits, risks, and tradeoffs in realigning major command organizations and functions, we recommend that the Secretary of the Army require that workload-based analyses, such as the 12-step methodology, be used to demonstrate the benefits, risks, and tradeoffs of Force XXI institutional redesign decisions. To improve the Army’s ability to oversee reforms for increasing the effectiveness and efficiency of its institutional force, we recommend that the Secretary of the Army assign a single office the responsibility to provide management and oversight of the institutional redesign process to include identifying clear, specific, and measurable performance goals; publishing these goals in a final version of Pamphlet 100xx; monitoring savings and implementation costs; and periodically reporting results achieved along with the stated goals and projections of the initiatives’ savings and implementation costs. In written comments on a draft of this report, DOD generally concurred with the report and all recommendations. 
DOD also stated that it will request that the Army take appropriate action to implement our recommendations. DOD’s comments are reprinted in their entirety in appendix III. We are providing copies of this report to the Secretaries of Defense and the Army, other appropriate congressional committees, and the Director of the Office of Management and Budget. We will also provide copies to other interested parties on request. Please contact me at (202) 512-3504 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. To determine the extent to which the Army addressed its historical weakness in determining institutional requirements, we compared the June 1997 draft material weakness plan to the October 1997 approved plan to identify changes in milestones, progress in subplans’ development, and the intent to administer and monitor the plan. We also examined Army and Department of Defense (DOD) guidance and regulations regarding implementation of workload-based systems and processes, such as the 12-step method and the Army Workload and Performance System (AWPS). We obtained documentation and interviewed knowledgeable Army officials from Army Headquarters, Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs, and the Deputy Chief of Staff for Operations and Plans—Directorate of Force Programs, Washington, D.C.; and the Industrial Operations Command, Rock Island, Illinois. We observed pilot testing of AWPS at the Corpus Christi Army Depot, Texas. We also analyzed documents and held discussions regarding AWPS system implementation, status of training, management of depot workload, systems requirements, and the lack of updated project schedules. 
We obtained relevant documentation on existing requirements determination processes at Forces Command, Fort McPherson, Georgia; the Army Training and Doctrine Command, Fort Monroe, Virginia; the Army Materiel Command, Alexandria, Virginia; and the Management Engineering Activity, Huntsville, Alabama, which performed the Army Materiel Command’s requirements assessments. The three major commands account for 43 percent of the civilian institutional workforce and 47 percent of the military institutional workforce. We compared each of the command’s processes with the 12-step method to identify compliance and program differences. We also assessed the use of the requirements determination processes in budget formulation and execution at Army Headquarters and the major commands. To assess the extent to which the Army’s streamlining initiatives identified opportunities to reduce Army personnel devoted to institutional functions and realize savings, we reviewed streamlining guidance and interviewed knowledgeable officials from the Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs; the Directorate of Force Programs within the Deputy Chief of Staff for Operations and Plans; the Assistant Secretary of the Army for Financial Management and Comptroller; the Deputy Chief of Staff for Logistics; Forces Command; Training and Doctrine Command; and Army Materiel Command. During discussions with the officials, we obtained documentation describing each initiative, estimated implementation costs, and dollar and personnel savings for the 107 fiscal year 1998-2003 institutional redesign initiatives. We also obtained documentation and discussed military position transfers from institutional to operational forces. However, our assessment focused on those initiatives that represented the largest percentage of the phase I redesign savings. 
Officials from the Army’s Office of Program Analysis and Evaluation provided documentation of the dollar savings included in the 1998-2003 Program Objective Memorandum and validated our analysis of those numbers. The Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs validated the personnel savings. To identify the distribution of active Army and civilian institutional personnel and analyze institutional trends, we obtained the number of institutional positions from fiscal years 1992 to 2003 from the Army Force Management and Support Agency’s Structure and Manpower Allocation System database. We did not conduct a full reliability assessment because the data used in the report are for background and context and are not vital to audit results. However, Army officials explained the embedded system edits they rely on to detect data errors and protect data integrity. Additionally, we independently corroborated the numbers at two commands. On the basis of our comparisons and the description of the database’s system edits, we were satisfied that these data are the best available and that they accurately support our statements on institutional composition and trends. We conducted our review from April to December 1997 in accordance with generally accepted government auditing standards. Brenda M. Waterfield, Evaluator-in-Charge; Jeanett H. Reid, Evaluator.
Pursuant to a legislative requirement, GAO reviewed the extent to which the Army has: (1) taken corrective action to resolve its material weakness in determining institutional personnel requirements; and (2) identified opportunities to reduce personnel and realize savings through its Force XXI Institutional Redesign Effort. GAO noted that: (1) the Army developed a corrective action plan to resolve its material weakness in determining institutional personnel requirements but may have difficulty achieving the plan's completion date; (2) two critical subplans have not been developed, one that implements a new costing system and another that develops a new computer-based methodology--the Army Workload and Performance System (AWPS); (3) without specific steps and milestones for both of these efforts, the Army lacks the tools it needs to ensure that the plan will be completed by December 1999; (4) milestones for both efforts have slipped from original estimates, and in the case of the computer-based methodology, the Army has missed some of its interim goals; (5) in addition, a plan initiative to ensure that major commands use a 12-step methodology to analyze workload may not be implemented on time unless more personnel are assigned to the office responsible for this effort; (6) the Army's institutional redesign effort has not resulted in a reduction in major command headquarters, and the dollar and position savings identified are overstated; (7) one redesign initiative resulted in the redesignation of a major command as a subcommand; (8) however, the Army
also created a new command, resulting in no net decrease in the number of commands; (9) also, the Army transferred a command but did not reorganize it to achieve efficiencies; therefore, this effort produced virtually no decrease in the command's 9,000 positions; (10) the Army transferred about 2,800 active Army positions from institutional to operational forces based on two initiatives, but these initiatives did not produce the anticipated savings, and personnel cuts had to be made elsewhere; (11) the Army's efforts to establish workload-based requirements and redesign institutional functions have produced few results; (12) Army personnel trend data from 1992-2003 show that the Army has not been successful in reducing the proportion of institutional to operating forces within the active Army; (13) in addition, the Army does not currently have a workload basis for allocating its personnel resources among institutional organizations and ensuring that the highest priority functions are funded first; (14) as a result, the Army may not have the analysis it needs to efficiently allocate many of the institutional positions that are programmed to be eliminated by fiscal year 2003 or additional reductions mandated by the Quadrennial Defense Review; and (15) without senior leadership attention, the Army's current initiatives may not achieve meaningful and measurable changes.
Our objectives were to assess (1) the overall status of State’s efforts to identify and correct its date-sensitive systems and (2) the appropriateness of State’s strategy and actions for remediating Year 2000 problems. In conducting our review, we assessed State’s Year 2000 efforts against our Year 2000 Assessment Guide. This guide addresses common issues affecting most federal agencies and presents a structured approach, as well as a checklist, to aid in planning, managing, and evaluating Year 2000 programs. This guidance describes five phases supported by program and project management activities. Each phase represents a major Year 2000 program activity or segment. The phases and a description of each follow.
Awareness - Define the Year 2000 problem and gain executive-level support and sponsorship for a Year 2000 program. Establish a Year 2000 program team and develop an overall strategy. Ensure that everyone in the organization is fully aware of the issue.
Assessment - Assess the Year 2000 impact on the enterprise. Identify core business areas and processes, inventory and analyze systems supporting the core business areas, and prioritize their conversion or replacement. Develop contingency plans to handle data exchange issues, lack of data, and bad data. Identify and secure the necessary resources.
Renovation - Convert, replace, or eliminate selected platforms, systems, databases, and utilities. Modify interfaces.
Validation - Test, verify, and validate converted or replaced platforms, systems, databases, and utilities. Test the performance, functionality, and integration of converted or replaced platforms, systems, databases, utilities, and interfaces in an environment that faithfully represents the operational environment.
Implementation - Implement converted or replaced platforms, systems, databases, utilities, and interfaces. Implement any and all contingency plans needed.
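These five phases are strictly sequential: a system cannot be validated before it is renovated, or implemented before it is validated. The sequence can be sketched as a small state machine. This is a hypothetical illustration for this discussion, not a tool drawn from the guide or from State:

```python
from enum import IntEnum

class Y2KPhase(IntEnum):
    """The five phases of the Year 2000 Assessment Guide, in order."""
    AWARENESS = 1
    ASSESSMENT = 2
    RENOVATION = 3
    VALIDATION = 4
    IMPLEMENTATION = 5

def advance(phase: Y2KPhase) -> Y2KPhase:
    """Advance a system to its next phase; phases must complete in sequence."""
    if phase is Y2KPhase.IMPLEMENTATION:
        raise ValueError("all phases already completed")
    return Y2KPhase(phase + 1)

# A system that has finished renovation proceeds to validation.
print(advance(Y2KPhase.RENOVATION).name)  # VALIDATION
```

Tracking each system against an ordered phase list like this mirrors how the report later tallies the number of systems that have completed renovation, validation, and implementation.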
We also assessed State’s efforts against our Year 2000 Business Continuity and Contingency Planning Guide, which was issued as an exposure draft in March 1998. The guide provides a conceptual framework for helping large agencies manage the risk of potential Year 2000-induced disruptions to their operations. Like our Assessment Guide, it offers a structured approach for reviewing the adequacy of agency Year 2000 business continuity and contingency planning efforts. To determine the overall status of State’s Year 2000 program, we analyzed the Department of State’s Year 2000 database, which includes data collected on a monthly basis from all of State’s bureaus, for four separate reporting periods: August 1997, December 1997, March 1998, and May 1998. State uses this database to track and measure program progress. We also reviewed the status reports State provided to the Office of Management and Budget (OMB) on a quarterly basis. To determine how State’s bureaus were implementing department policy and managing their Year 2000 program efforts, we interviewed Year 2000 coordinators at bureaus including Consular Affairs, Financial Management and Planning, Personnel, Diplomatic Security, and Information Management. We met with officials from the Diplomatic Telecommunications Service Program Office to determine what steps they were taking to ensure that telecommunications systems were Year 2000 compliant. We also reviewed internal State documents and reviews. We conducted our work from April 1997 through July 1998 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Secretary of State or her designee. The Acting Chief Financial Officer provided us with written comments that are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix I. 
Most of State’s automated information systems are vulnerable to the Year 2000 problem, which is rooted in the way dates are recorded and computed in automated information systems. For the past several decades, systems have typically used two digits to represent the year, such as “97” representing 1997, in order to conserve on electronic data storage and reduce operating costs. With this two-digit format, however, the Year 2000 is indistinguishable from 1900, or 2001 from 1901, etc. In addition, any electronic device that contains a microprocessor or is dependent on a timing sequence may also be vulnerable to Year 2000 problems. This includes, but is not limited to, computer hardware, telecommunications equipment, building security systems, elevators, and medical equipment. Should State fail to address the Year 2000 problem in time, its mission-critical operations could be severely degraded or disabled as the following examples illustrate. The failure of State’s Consular Lookout and Security System (CLASS) would hinder the ability of overseas posts to effectively screen visa applicants who may have a criminal and/or terrorist background. Embassy operations, such as property management and visa and passport processing, could be hindered at certain locations if State is unable to replace all of its noncompliant systems. State’s messaging systems, which are critical to the effective conduct of diplomatic missions, could fail if telecommunications devices are not replaced or upgraded. State has 262 systems comprising approximately 35 million lines of code written in over 17 programming languages. Major corporate systems include the Central Financial Management System (CFMS), the Central Personnel System (CPS), and CLASS. Through a strategy of system conversion and replacement, the department plans to remediate all of its noncompliant systems by March 31, 1999. 
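A minimal example makes the failure mode concrete. The code below is a hypothetical illustration of the two-digit convention, not drawn from any State system:

```python
def expand_two_digit_year(yy: int) -> int:
    """Legacy-style expansion: assumes every two-digit year means 19xx."""
    return 1900 + yy

# "97" is correctly interpreted as 1997...
assert expand_two_digit_year(97) == 1997
# ...but "00" becomes 1900 rather than 2000, so 2000 and 1900 collide.
assert expand_two_digit_year(0) == 1900

# Date arithmetic then fails: a record issued in "97" and expiring in "00"
# appears to expire 97 years before it was issued.
issued, expires = expand_two_digit_year(97), expand_two_digit_year(0)
print(expires - issued)  # -97
```

Any comparison, sort, or interval calculation built on the two-digit convention inherits this century rollover error, which is why devices with embedded timing logic are vulnerable as well.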
State supports its systems on a variety of hardware platforms, most of which are not Year 2000 compliant and will need to be fixed. Some of its corporate systems are operated on IBM mainframe computers at data processing centers in the Washington, D.C., area and overseas. According to State, some of its operating systems use antiquated “home-grown” code and are presently not Year 2000 compliant. This environment is not stable, and State is currently working to resolve the issue. The department also operates a variety of decentralized information technology platforms at posts around the world, including about 250 Wang VS minicomputers; 20,000 personal computers; and several hundred local area networks. Foreign service officers rely on this equipment for electronic mail, word processing, and other functions to develop reports and communicate information in support of State’s foreign policy objectives. The Wang minicomputers will be replaced as part of State’s effort to modernize its information technology infrastructure. This project is known as A Logical Modernization Approach (ALMA). According to State’s IRM Tactical Plan, the ALMA project will (1) ensure that legacy Wang VS equipment and software are replaced by December 31, 1999, and (2) implement modern, open, and standards-based systems throughout the department. Under the direction of State’s Bureau of Information Resources Management, the department plans to deploy the ALMA infrastructure to all of State’s posts by the end of fiscal year 1999. State plans to resolve its Year 2000 problem using a phased process. In keeping with its decentralized approach to information technology management, State has charged its bureaus with responsibility for ensuring that all of their systems process dates correctly. Further, State is requiring the bureaus to redirect existing funds to correct their systems and will provide no additional funds for Year 2000 remediation.
Although State estimated in its May 1998 quarterly report to OMB that it would cost $153 million to address its Year 2000 problem, in commenting on a draft of this report, the department stated that it is currently collecting and analyzing cost data and that an overall figure has not been finalized. State’s Chief Information Officer (CIO) has overall responsibility for ensuring Year 2000 compliance. In addition, State has appointed a full-time Deputy CIO for Year 2000. The department also established a Year 2000 Steering Committee to (1) review new and ongoing information resources management (IRM) and non-IRM systems with regard to Year 2000 compliance, (2) conduct monthly reviews of Year 2000 efforts of all bureaus, and (3) reallocate resources across the department to meet Year 2000 needs as necessary. The Year 2000 Steering Committee is chaired by the Under Secretary for Management, and its membership includes the CIO, the Deputy CIO for Year 2000, the Chief Financial Officer, the Inspector General, the Assistant Secretaries of State for Diplomatic Security, Consular Affairs and Administration, and other senior officials. The CIO and the Year 2000 project manager monitor critical project implementation at key decision points and make specific recommendations to the Steering Committee. This committee meets monthly. Table 1 depicts the organizations involved in Year 2000 activities and their respective responsibilities. To increase the awareness of Year 2000 problems and to foster coordination among components, State has taken the following actions. In an April 1996 memo, the CIO alerted bureaus to the problem and called on them to attend a meeting to discuss the issue. In May 1996, State established a Year 2000 Project Office to manage the department’s Year 2000 program. In April 1997, the Year 2000 Project Office issued its Year 2000 Project Plan, which outlines the department’s strategy for achieving Year 2000 compliance. 
Subsequently, the project office distributed formal standards and guidance, including (1) a memorandum to all application developers (both in-house and contractor) providing guidance on Year 2000 data formats governing internal and external data exchange between information systems, (2) cable notices to all overseas posts informing them about the Year 2000 problem and identifying the steps they need to take to resolve the problem, and (3) Year 2000 planning and reporting guidance requiring bureaus to develop Year 2000 project plans and to provide quarterly (later changed to monthly) progress reports. In December 1997, State’s Year 2000 Project Office issued draft Year 2000 test planning and certification guidance to the department. This document describes the department’s Year 2000 test planning requirements, strategy, and schedule. In addition, the guidance identifies Year 2000 renovation test facilities for the IBM Mainframe, Wang, and PC/LAN test environments. In March 1998, State enlisted the Inspector General to help monitor its Year 2000 program, validate the data on Year 2000 status being reported by each component, identify problem areas, and recommend corrective actions. In March 1998, State reorganized the management of its Year 2000 effort. A Deputy CIO for Year 2000 was appointed as part of the general CIO office. The Under Secretary of State (Management) made each of the assistant secretaries personally responsible for ensuring that each of their bureaus is Year 2000 compliant. Finally, an additional contractor, KPMG Peat Marwick LLP, was brought in to work alongside State personnel and the contractor already in place, Adsystech. KPMG Peat Marwick LLP was tasked with assisting in the overall management of the Year 2000 effort; Adsystech had been given responsibility for providing technical advice to bureaus for remediating systems. 
Adsystech is also responsible for collecting and analyzing data on the remediation process, and coordinating technical matters between State Department management and individual bureaus. Using its assessment methods, State has identified a total of 262 systems, 64 mission critical and 198 nonmission critical. State has also determined that 40 mission-critical systems need to be remediated—27 of these need to be replaced and 13 need to be converted. In addition, State reports that 146 nonmission-critical systems need to be converted, replaced, or retired. Details of State’s assessment of its systems, as reported for May 1998, are shown in table 2. State’s progress in remediating systems has been inadequate. Of the 40 systems State has identified as mission-critical and is either converting or replacing, only 17 (about 42.5 percent) have completed renovation, 11 have completed validation, and only two have completed implementation. Tables 3 and 4 show the number of applications that have completed each phase along with the number of applications that have started but have not yet completed the phase. In addition, the department has already conceded that it will not achieve its goal of eliminating all of its Wang software and hardware systems by the year 2000. As part of its IRM modernization program, State originally planned to eliminate all of its Wang VS systems (which include 21 mission-critical noncompliant systems) and begin running them on the Windows NT platform before January 1, 2000. According to State officials, however, because of delays in converting the Wang Systems to the Windows NT platform, the department will have to continue running some systems on the Wang platform after January 1, 2000. If all of the Wang systems cannot be replaced or made compliant before the year 2000, the department will not be able to run all of its mission-critical administrative applications overseas. 
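The reported completion counts for the 40 mission-critical systems under repair can be tabulated directly. The figures are from the May 1998 reporting period; the code is only an illustrative sketch, not a State tracking tool:

```python
# Completion counts for State's 40 mission-critical systems being converted
# or replaced, as reported for May 1998 (figures from the report).
mission_critical_in_repair = 40
completed = {"renovation": 17, "validation": 11, "implementation": 2}

for phase, count in completed.items():
    # e.g. "renovation: 17/40 (42.5%)"
    print(f"{phase}: {count}/{mission_critical_in_repair} "
          f"({count / mission_critical_in_repair:.1%})")
```

The 42.5 percent renovation figure in the text is simply 17 of 40; the implementation figure, at 2 of 40, shows how far the program remained from its March 31, 1999, goal.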
Further, a May 1998 report found that five of the mission-critical systems reported to OMB as compliant were, in fact, noncompliant and needed some form of additional remediation. The report also noted that 13 mission-critical systems were in a low degree of preparedness for certification and 8 systems were in a moderate degree of preparedness. In addition, seven of the mission-critical systems in a low degree of preparedness were scheduled to miss the OMB milestone date for implementation by 5 months, pushing their expected implementation to September 1999. These included systems essential to citizen services, such as immigrant and nonimmigrant visa issuance and tracking, and embassy and post security. One of these systems, the Immigrant Visa System, was reported to OMB as compliant. An additional system, the Non-Immigrant Visa System, was scheduled to miss the OMB milestone date for implementation by 1 month. As noted in our Assessment Guide and our Contingency Planning Guide, the Year 2000 problem is not just an information technology problem, but primarily a business problem. Thus, the process of identifying, ranking, and remediating information systems should include an identification of core business areas and business processes and assessments of the impact of information system failures on those business areas and processes. If this is not done, the agency will not have a good basis for prioritizing systems for correction or developing contingency plans that focus on the continuity of operations. Until recently, State’s Year 2000 effort lacked a mission-based perspective. For example, at the time of our review, State had not determined its core business functions and linked these functions to its mission or to its support systems. In addition, the department had not conducted formal risk analyses of the majority of its systems. 
In responding to a draft of this report, State noted that it is currently developing a framework for a mission-based perspective for its Year 2000 problem. It has recently determined its core business functions and linked these functions to its mission. However, it has not yet linked its core business functions to support systems necessary to conduct these operations. As further illustrated below, until it fully adopts this perspective, State will not be able to adequately prioritize its systems or develop meaningful contingency plans. According to our Assessment Guide, an important aspect of the assessment phase is determining and prioritizing the correction of the systems that have the highest impact on an agency’s mission and thus need to be corrected first. This helps an agency ensure that its most vital systems are corrected before systems that do not support the agency’s core business. State has provided its bureaus with a definition of priorities—routine, critical, and mission-critical—and charged them with the task of identifying and ranking their respective systems according to this definition. Mission critical, the highest priority, was defined as crucial to worldwide operations, affecting the public directly, or having national security implications. Subsequently, the bureaus assessed their respective systems and each provided the Year 2000 Project Office with a list of systems—64 in total—that they determined were mission-critical to department operations. However, this process is flawed because it provides no means of distinguishing between individual bureaus’ priorities—some of which are essential to State’s core mission and some of which are not. 
For example, the following systems have been ranked by individual bureaus as mission critical: REGIS, a system designed to register and track students; the MSE Network, a system used to sort and track unclassified mail; CLASS, a system designed to identify criminals and possible terrorists in order to block their entry into the United States; CRIS, an on-line database used to track citizens involved in crises overseas; and ICARS, a system used for immigration control and reporting. Clearly, CLASS, CRIS, and ICARS are much more important to State’s core missions than REGIS and MSE. But under State’s Year 2000 approach, they rank equally. Until State begins focusing on core business areas and processes, it will not have a basis for further ranking these systems for remediation. Additionally, it appears that State has not placed enough priority on fixing its mission-critical systems before its nonmission-critical systems. In fact, as tables 3 and 4 indicate, State is making better progress on its nonmission-critical systems than on its mission-critical systems. For example, 31, or 21 percent, of nonmission-critical systems have reportedly completed the implementation phase, while only 2, or 5 percent, of mission-critical systems have done so. State officials agree that the current prioritization process is flawed. In responding to a draft of this report, the department stated that it had recently identified its core business functions and planned to link them to the 64 systems previously identified as mission critical, thereby providing a functional basis for prioritizing its efforts. However, State did not plan to reassess the 198 systems previously identified as nonmission-critical using its new mission-based approach. Without reassessing all of its systems, State will not be able to fully ensure that the most critical functions will not be disrupted by the Year 2000 problem. 
To mitigate the risk that Year 2000-related problems will disrupt operations, our guide on business continuity and contingency planning recommends that agencies perform risk assessments and develop realistic contingency plans during the assessment phase to ensure the continuity of critical operations and business processes. Contingency plans are vital because they identify the manual or other fallback procedures to be employed should systems miss their Year 2000 deadline or fail unexpectedly. These plans also define the specific conditions that will cause their activation. State has directed its bureaus to develop written contingency plans for all mission-critical systems. At the time of our review, State reported that 16 written plans had been prepared, covering less than half of the 40 systems State identified as mission-critical and noncompliant. However, State was able to provide us with only six of these plans. These plans included only brief risk assessments and summary statements about possible alternate approaches for providing system functionality. They did not discuss the impact of the failure of system functionality on State’s mission. Furthermore, State’s contingency planning is insufficient because it has not focused on ensuring the continuity of department operations and business processes. As noted in our Contingency Planning Guide, the risk of failure is not limited to an organization’s internal information systems. Many federal agencies also depend on information and data provided by their business partners—including other federal agencies, state and local agencies, international organizations, and private sector entities. In addition, they depend on services provided by the public infrastructure—including power, water, transportation, and voice and data telecommunications. Because of these risks, agencies must not limit their contingency planning effort to the risks posed by the Year 2000-induced failures on internal information systems. 
Rather, they must include the potential Year 2000 failures of others, including business partners and infrastructure service providers. By focusing only on its internal systems, State will not be able to protect itself against major disruptions of business operations. In its May 1998 quarterly report to OMB on the status of its Year 2000 program, State acknowledged that its contingency planning efforts to date have focused on information technology systems rather than on the “larger picture of continuity of business operations.” To strengthen contingency planning, State has established a business continuity work group which includes members from the Year 2000 Steering Committee and is chaired by the Under Secretary for Management. This group is responsible for the development of business continuation strategies for Year 2000 risks. State has not identified a deadline for this group to complete its work. State systems interface with each other as well as with systems belonging to other federal agencies and international entities, as shown in the following examples. State’s central messaging system, which is used to transmit official diplomatic cables to overseas posts and other U.S. sites worldwide, interfaces with the Department of Defense. State’s central personnel system interfaces with its payroll system to support payroll processing functions. State’s CLASS system receives data on persons wanted for, or convicted of, drug-related crimes from the Drug Enforcement Administration’s (DEA) Lookout System. As a result, it is essential that State ensure that all of its interfaces are Year 2000 compliant and that noncompliant interfacing partners will not introduce Year 2000-related errors into compliant State systems. Our Year 2000 Assessment Guide recommends that agreements with interface partners be initiated during the assessment phase to determine how and when interface conflicts will be resolved. 
State has not managed the identification and correction of its interfaces effectively. First, it is still in the process of identifying its interfaces, even though our Year 2000 Assessment Guide recommended that this be done during the assessment phase. At the time of our review, State had identified 12 interfaces between mission-critical and external systems belonging to State and other agencies and organizations and 28 internal interfaces between bureaus that are affected by the Year 2000 problem. In addition, in June 1998, State reported to the President’s Council on Year 2000 Conversion that it maintained interfaces with commercial banks in 157 countries. According to State, 17 percent of its overseas accounts were Year 2000 compliant, 48 percent were scheduled to be compliant by December 1998, 7 percent in March 1999, 3 percent in June 1999, and 22 percent in December 1999. Three percent of the accounts were reported as having inadequate compliance plans. However, State recently acknowledged that it could not identify every interface with other agencies or among the bureaus or verify whether all system owners were reporting on their interfaces or reporting correctly. State is now in the process of identifying these interfaces and verifying their progress. Second, State has made little progress in developing agreements with its interface partners, which our Year 2000 Assessment Guide also recommended be done in the assessment phase in order to allow enough time for conflicts to be resolved. As of May 1998, State’s bureaus were reporting that Memorandums of Understanding had been completed for only 10 of the interfaces involving systems that it had assessed as mission critical and noncompliant. Until it has agreements in place for the remaining interfaces, State will not have assurance that partners are working to correct interfaces effectively or in a timely manner. 
Moreover, a May 27, 1998, report listed seven mission-critical systems as having a low degree of preparedness for Year 2000 certification based on the condition of their interfaces. The report also found problems with 20 other mission-critical systems due to interface problems. The effective conduct of State operations hinges on its ability to successfully remediate its mission-critical computer systems before the Year 2000 deadline. While State has taken a number of actions to address this issue, its progress in several critical areas has been inadequate: only 17 of 40 systems that State has designated as mission-critical have completed renovation and it has not yet identified all of its interfaces. Further, if State continues its current approach, which lacks a mission-based perspective, it will risk spending time and resources fixing systems that have little bearing on its overall mission. It will also not be prepared to respond to unforeseen problems and delays. We recommend that the Secretary of State ensure that senior program managers and the Chief Information Officer: (1) Reassess all of State’s systems using the new mission-based approach to identify those systems supporting the most critical business operations. (2) Ensure that systems identified as supporting critical business functions pursuant to recommendation 1 receive priority attention and resources over those systems that do not support critical business functions. (3) Redirect its contingency planning efforts to focus on the core business functions and supporting systems, particularly those supporting systems that are already scheduled to miss the OMB milestone date for implementation. (4) Ensure that the bureaus have identified and corrected interfaces and developed written memorandums of agreement with interface partners. State generally agreed with the conclusions and recommendations in our report. 
The department noted that it has already begun to respond to our observations and recommendations and that many of the specific concerns we raised have been independently identified by the department’s own consulting firm, KPMG Peat Marwick LLP. Additionally, State provided updated information about its management initiatives to address the Year 2000 problem, stating that it is rapidly implementing corrective measures for the problems cited in our report. While these changes demonstrate increased management awareness and attention to the Year 2000 problem, it will be critical for the department to follow through on these initiatives and ensure that they have a positive impact on the remediation, testing, and implementation of systems. Furthermore, the department noted in its comments that it has recently identified its core business functions and linked these functions to its mission. The department also stated that it planned to link its core business functions to the 64 systems previously identified as mission critical. However, State did not plan to reevaluate the 198 systems previously identified as nonmission-critical. Until State applies its new mission-based perspective to all of its systems, it will not be able to fully ensure that the most critical functions will not be disrupted by the Year 2000 problem. We are providing copies of this letter to the Ranking Minority Members of the Subcommittee on the Departments of Commerce, Justice, State, the Judiciary and Related Agencies, House Committee on Appropriations, and the House Committee on International Relations. 
We are also sending copies to the Chairmen and Ranking Minority Members of the Senate Special Committee on the Year 2000 Technology Problem, the Subcommittee on Commerce, Justice, and State, the Judiciary, and Related Agencies, Senate Committee on Appropriations, Senate Committee on Governmental Affairs, the Subcommittee on Government Management, Information and Technology, House Committee on Government Reform and Oversight, and the Subcommittee on Civil Service, House Committee on Government Reform and Oversight. We are also sending copies to the Secretary of State, the Director of the Office of Management and Budget, and other interested parties. Copies will be made available to others upon request. If you have any questions on matters discussed in this report, please call me at (202) 512-6240. Major contributors to this report are listed in appendix II. The following are GAO’s comments on the Department of State’s letter dated July 30, 1998. 1. State’s detailed statistical information about its Year 2000 effort is constantly changing as State’s Year 2000 program evolves and remediation efforts progress. The information in our report represents the official figures reported to OMB in May 1998. The figures that State claims are more current are not substantially different from those reported in May 1998 and would not have any significant impact on our findings and recommendations. 2. Our assessment of the relative priority of fixing mission-critical and nonmission-critical systems did not include systems that State had designated as “compliant.” Instead, this assessment is based on the comparative number of noncompliant mission-critical and nonmission-critical systems that have completed the implementation phase. Only 2 (5 percent) of the 40 mission-critical noncompliant systems had been implemented as of May 1998 whereas 31 (21 percent) of the 146 nonmission-critical noncompliant systems had been implemented. 
We agree that systems currently considered “compliant” may not actually meet criteria for compliance and need to undergo their own, separate certification process. 3. State’s comments indicate that it is still taking a flawed approach to contingency planning. Like the prioritization of systems, contingency planning needs to be a top down rather than a bottom up process. That is, agencies must first identify their core business processes and assess the Year 2000 risk and impact of these processes. Subsequently, they can develop plans for each core business process and infrastructure component. As noted in our Year 2000 Contingency Planning Guide, this approach enables agencies to consider and mitigate risks that extend beyond individual applications or systems. For example, as noted in our report, State depends on information and data provided by other federal agencies, international organizations, and private sector entities. It also depends on services provided by the public infrastructure, including power, water, transportation, and voice and data telecommunications. Neither of these dependencies will be considered if contingency planning is focused on individual internal systems. 4. State provided no evidence of increased identification and awareness of commercial bank interfaces. Neither could the Department identify the number of international interfaces it might have. 5. In May 1998, State reported to OMB that its estimated cost to address its Year 2000 problem was $153 million. In our final report, we have noted that State no longer considers this figure to be accurate. John Deferrari, Assistant Director; Frank Deffer, Assistant Director; Brian Spencer, Technical Adviser; R.E. Canjar, Evaluator-In-Charge; Cristina Chaplain, Communications Analyst. The first copy of each GAO report and testimony is free. Additional copies are $2 each. 
Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO reviewed the Department of State's progress in solving its year 2000 computer systems problem, focusing on: (1) the overall status of State's efforts to identify and correct its date-sensitive systems; and (2) the appropriateness of State's strategy and actions to correct its year 2000 problems. 
GAO noted that: (1) State has taken many positive actions to increase awareness, promote sharing of information, and encourage its bureaus to make year 2000 remediation efforts a high priority; (2) however, State's progress in responding to the problem has been slow; (3) for example, of the 40 systems that State identified as mission critical and needing either converting or replacing, only 17 (42.5 percent) have completed renovation; (4) more importantly, until recently, State's year 2000 effort lacked a mission-based perspective, that is, it had not determined its core business functions or linked these functions to its mission or to the support systems necessary to conduct these operations; (5) because the year 2000 problem is primarily a business problem, agencies need to take a business perspective in all aspects of it; that is, they should identify their core business areas and processes and assess the impact of system failures; (6) until it takes these steps, State will not have a good basis for prioritizing its systems for the purposes of correction or developing contingency plans that focus on the continuity of operations; (7) in responding to GAO's draft report, State noted that it has recently determined its core business functions and linked these functions to its mission; (8) it has not yet linked its core business functions to support systems necessary to conduct these operations; (9) State has not been managing the identification and correction of its interfaces effectively; (10) specifically, it is still identifying its interfaces, even though this task should have been completed in the assessment phase, and it has developed written agreements with data exchange partners for only a small portion of its systems; and (11) as a result, State has increased the risk that year 2000 errors will be propagated from one organization's systems to another's. |
Social Security has provided significant income protection for the nation’s women. While women, on average, have lower earnings than men, the program has several features that are advantageous to women. First, unlike lifetime annuities purchased from private insurance companies, Social Security does not reduce women’s benefits to account for the fact that, as a group, they live longer than men. Second, Social Security uses a progressive formula to calculate individual benefits, which replaces a relatively larger proportion of lifetime earnings for people with low earnings than for people with high earnings. Because women typically earn less than men, women’s monthly benefits replace a larger proportion of their earnings. The program also provides benefits to retirees’ dependents—such as spouses, ex-spouses, and survivors—and roughly 99 percent of these benefits go to women. Nevertheless, women receive lower Social Security benefits than men. In December 1997, the average monthly retired worker benefit for women was $662.40 compared to $860.50 for men. This is because Social Security benefits are based primarily on a worker’s lifetime covered earnings, which on average are much lower for women. Although labor market differences between men and women have narrowed over time, the Bureau of Labor Statistics does not project that they will disappear entirely, even in the long term. Thus, women can expect to continue to receive lower average monthly benefits than men, although these differences are partially offset by the presence of spousal benefits. Differences in work histories further depress women’s Social Security benefits relative to men’s, since under the current rules Social Security calculates monthly benefits on the basis of lifetime taxable earnings averaged over a worker’s 35 years of highest earnings. 
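The 35-year averaging rule just described can be sketched in a few lines of Python. This is a stylized illustration only (actual benefits rest on SSA's wage-indexed earnings and a progressive benefit formula, and the career lengths and salary below are hypothetical), but it shows how years out of the labor force pull the average down:

```python
def average_highest_earnings(earnings, n_years):
    """Average earnings over the n_years highest-earning years,
    padding with zeros when the career is shorter than n_years."""
    top = sorted(earnings, reverse=True)[:n_years]
    top += [0.0] * (n_years - len(top))  # zero-earnings years count in full
    return sum(top) / n_years

# Two hypothetical workers with identical $30,000 annual salaries:
long_career = [30_000.0] * 36   # 36 years in covered employment
short_career = [30_000.0] * 25  # 25 years in covered employment
avg_long = average_highest_earnings(long_career, 35)    # 30,000 exactly
avg_short = average_highest_earnings(short_career, 35)  # about 21,429
```

Extending the averaging period to 38 or 40 years, as some proposals would, adds still more zero-earnings years to the shorter career, so the same function called with a larger `n_years` yields a lower average again.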
Because women generally spend more time out of the labor force than men (primarily for reasons associated with child rearing), they have fewer years of taxable earnings; thus, more years with zero earnings are included in calculating their benefits. Even if women and men had identical annual earnings when they both worked, women’s shorter time spent in the labor force results in lower average lifetime earnings, which in turn leads to lower retirement benefits. In 1993, the average 62-year-old man had worked 36 years, whereas the average 62-year-old woman had worked only 25 years. Almost 60 percent of these 62-year-old men had a full 35 years of covered earnings compared with less than 20 percent of women. A second cause of lower lifetime earnings is women’s lower wage rates. In part, this reflects the fact that women are more likely to work part-time, and part-time workers tend to earn lower wages than full-time workers. However, even if only year-round, full-time male and female workers are compared, the median earnings for women are still less than 75 percent of men’s. The gap narrows when differences in education, years of work experience, age, and other relevant factors are taken into account. The changes contained in various Social Security reform proposals would likely have a disproportionate effect on women. Many reform proposals include provisions that would reduce current benefit levels, for example, reductions in the cost-of-living adjustment and increases in the normal or early retirement ages. Reducing all benefits proportionately would hit hardest those who have little retirement income other than Social Security. Reducing Social Security benefits by, for example, 10 percent would result in a 10-percent reduction in total retirement income for those who have no other source of income but would cause only a 5-percent reduction for those who rely on Social Security for only half their retirement income. 
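The arithmetic behind this point can be stated as a one-line function; in this stylized sketch, the drop in total retirement income equals the across-the-board benefit cut scaled by the share of income that Social Security provides (other income is assumed unchanged):

```python
def total_income_reduction(ss_share, benefit_cut):
    """Fractional drop in total retirement income from an across-the-board
    Social Security benefit cut, for a retiree who draws ss_share of
    income from Social Security and whose other income is unchanged."""
    return ss_share * benefit_cut

sole_source = total_income_reduction(1.0, 0.10)  # -> 0.10 (10 percent)
half_source = total_income_reduction(0.5, 0.10)  # -> 0.05 (5 percent)
```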
Women, especially elderly women, are more likely to rely heavily, if not entirely, on Social Security. Among Social Security beneficiaries aged 65 or older in 1996, about half the married couples, two-thirds of the unmarried men, and three-fourths of the unmarried women (who accounted for almost half of the three groups) relied on Social Security for at least half their retirement income. One-fourth of the unmarried women relied on Social Security for all their retirement income. Other changes could exacerbate existing disadvantages for some women. For example, some proposals would extend the period for computing benefits from 35 years to 38 or 40 years. Because most women do not have even 35 years with covered earnings, increasing the computation period would increase the number of years with zero earnings used in calculating their benefits and, thus, lower their average benefit. The Social Security Administration (SSA) forecasts that fewer than 30 percent of women retiring in 2020 will have 38 years of covered earnings, compared with almost 60 percent of men. SSA estimates that extending the computation period to 38 years would reduce women’s benefits by 3.9 percent, while extending the period to 40 years would reduce their benefits by 6.4 percent. The comparable impact on men from an extension to 38 or 40 years is 3.1 percent and 5.2 percent, respectively. Some reform proposals include a specific provision designed to improve the status of survivors, who are predominantly widows, but simultaneously reduce spousal benefits that generally accrue to women. Under the current system, a retired worker’s spouse who is not entitled to benefits under her own work records will receive a benefit up to 50 percent of her husband’s benefit and a widow will receive up to 100 percent of her deceased husband’s benefit. 
One proposal would reduce the spousal benefit from 50 percent to 33 percent of the worker’s benefit but would increase the survivor’s benefit to either 75 percent of the couple’s combined benefit or 100 percent of the worker’s benefit, whichever is greater. One-earner couples would receive reduced lifetime benefits because the spousal benefit would be reduced while both the retiree and spouse were alive, but the survivor benefit would remain the same as under current law. Two-earner couples would lose some benefits while both were alive if one spouse was dually entitled, but the survivor would receive higher benefits than under current law. Many reform proposals would fundamentally restructure Social Security by creating retirement accounts that would be owned and managed by individuals. While such accounts can increase benefits for retirees, women on average might not reap the same advantages such an investment could bring to men. As stated earlier, the difference is partly the result of women having shorter work histories and lower earning levels, which suggests they generally will contribute less to these accounts. The difference is also partly the result of differences in investment behavior. Economists have found evidence suggesting that women generally are more risk averse than men in financial decisionmaking. Studies indicate that, compared with men, women might choose a relatively low-risk investment strategy that earns them lower rates of return for their retirement income accounts. Although proponents argue that individual accounts could raise retirement benefits for both sexes, an overly conservative investment strategy could leave women with lower final account balances than men, even if both make the same contributions. Thus, even though women could improve their financial situation under a retirement system that included individual accounts, the gap between the benefits received by men and women could increase. Previous studies have similarly found that women invest less in stocks than men. 
Our analysis, using different data and focusing on individuals in their prime working and saving years, increases the robustness of this conclusion. By investing less in these riskier assets, women benefit less from the potentially greater rates of return that, in the long run, stocks could generate. At the same time, however, they are not as exposed to large losses from riskier assets. While it is true that in the past U.S. stocks have almost always posted higher returns than less risky assets, there is no guarantee that they will always do so. Some pension specialists believe that information is a critical factor in helping individuals make the most of their retirement investments. Providing investors with information that covers general investment principles and financial planning advice might help both women and men to better manage their investments and close the gap in the average investment returns received by men and women. While employers are not legally required to provide this type of information, many have done so in the case of 401(k) accounts. It is not clear who would provide such information to workers under a restructured Social Security system that included mandatory individual accounts. The nature and extent of such information and education efforts, when combined with the design of related investment options, are likely to help maximize the effectiveness of, and minimize the risk associated with, individual accounts under the Social Security system. How individual account accumulations are paid out will also make a difference in retirement income for many women. Unless otherwise specified, workers could choose to receive their individual account balances at retirement as a lump-sum payment, as some pension plans now allow, to spend as they see fit. If retirees and their spouses do not accurately predict their remaining life spans and consume their account balances too quickly, they may end up with very small incomes late in life. 
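A short sketch can make the accumulation and payout mechanics concrete: a lower expected return compounds into a smaller balance over a full career, and converting a balance into lifetime income depends on how long that income must last. Every figure below (contributions, returns, life expectancies) is hypothetical, and the annuity formula is a deliberate simplification that ignores mortality pooling, expenses, and survivor features:

```python
def final_balance(annual_contribution, annual_return, years):
    """Balance after `years` equal end-of-year contributions
    compounding at a constant annual return."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + annual_return) + annual_contribution
    return balance

def monthly_annuity(balance, annual_rate, life_expectancy_years):
    """Level monthly payment that exactly exhausts `balance` over the
    expected remaining lifetime at a constant interest rate."""
    i = annual_rate / 12
    n = round(life_expectancy_years * 12)
    return balance * i / (1 - (1 + i) ** -n)

# Identical contributions, different (assumed) portfolio returns:
stock_heavy = final_balance(2_000, 0.07, 40)   # higher expected return
conservative = final_balance(2_000, 0.05, 40)  # lower-risk strategy

# The same balance annuitized on gender-specific tables, using assumed
# remaining life expectancies at retirement of 16 versus 19 years:
his_payment = monthly_annuity(100_000, 0.05, 16)
her_payment = monthly_annuity(100_000, 0.05, 19)
```

Under these assumptions the conservative portfolio ends the career with a markedly smaller balance, and the longer life expectancy produces a smaller monthly check from the same balance; a gender-neutral table would instead price both annuities at a blended expectancy, equalizing the monthly amounts through a cross-subsidy.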
Moreover, men and women could accumulate identical balances in their individual accounts and still end up with very different monthly benefits if they were to purchase annuities and if the annuities were based on gender-specific life tables. Insurance companies that sell annuities usually take into account women’s longer life expectancy and either provide a lower monthly benefit to women or charge women more for the same level of benefits given to men. In the case of employer-provided group annuities, gender-neutral life tables must be used in the calculation of monthly benefits, which ensures equal benefits for men and women with the same lifetime earnings. Requirements to use gender-neutral life tables involve cross-subsidies between men and women. Insurance companies also pay lower benefits for a joint and survivor annuity that covers both husband and wife than for a single life annuity that covers only the worker during his or her lifetime—again because the total time in which the benefits are expected to be paid is longer. Women are more likely to receive the survivor portion of this type of annuity, since they are more likely to outlive their husbands. Thus, while the total lifetime annuity benefits for men and women may be similar, the monthly benefit women receive, either as retirees or as survivors, will likely be lower and could result in a lower standard of living in retirement. Other groups of women will also need to be considered if individual accounts are introduced. Under current Social Security provisions, divorced spouses and survivors are entitled to receive benefits based on their former spouse’s complete earnings record if they were married at least 10 years. Most of those receiving benefits under this provision are women. Many individual retirement account proposals do not acknowledge divorcees and survivors as having any specific claim on the individual accounts of their former spouses. Under these proposals, the current automatic provision of these benefits would be eliminated. 
The money in these accounts could become a part of the settlement at the time of a divorce, but the current guarantee of these benefits might be lost. Using gender-neutral life tables would create cross-subsidies between men and women. However, doing so could protect retired women against a lower living standard that would result simply because they usually live longer than men. The needs of former spouses will also need to be considered in developing individual accounts. While the Social Security system has benefited women significantly through the spousal benefit and the progressivity of the benefit formula, women generally receive lower Social Security benefits than men because they work fewer years and earn lower wages. These work and earnings characteristics will affect the relative changes in average benefits for men and women under some reform proposals. In particular, these characteristics will work against women should reforms based on years with covered earnings be enacted. Because of women’s longer life expectancy, the creation of mandatory individual retirement accounts could also decrease women’s benefits relative to men’s if women continue to invest more conservatively than men. Women might also be disadvantaged if the accumulations in these accounts are paid as a lump sum rather than as a joint and survivor annuity based on gender-neutral life tables. Whether reforms include relatively modest modifications to the current system or more major restructurings that could include mandatory individual retirement accounts, some elements of the reform proposals could adversely affect many elderly women. Because elderly women are at risk for living in poverty, understanding how various elements of the population will be affected by different changes will be necessary if we are to protect the most vulnerable members of our society. This concludes my prepared statement.
I would be happy to answer any questions you or other Members of the Subcommittee might have. Pursuant to a congressional request, GAO reviewed: (1) how women currently fare under social security; (2) how they might be affected by some of the proposed changes in benefits to restore solvency; and (3) how women might fare under a system restructured to include individual accounts.
GAO noted that: (1) women have benefited significantly from the social security program; (2) many women who work are advantaged by the progressive benefit formula that provides larger relative benefits to those with lower lifetime earnings; (3) women who did not work or had low lifetime earnings and who were married benefit from the program's spousal and survivor benefit provisions; (4) however, women typically receive lower monthly benefits than men because benefits are based on earnings and the number of years worked; (5) any across-the-board benefit cuts to restore solvency might fall disproportionately on women as a group because they rely more heavily on social security income than men; (6) other types of reform approaches can have positive or negative effects on women depending on how the reforms are designed; (7) restructuring social security to include individual accounts also will likely have different effects on men and women; (8) because women earn less than men, contributions of a fixed percentage of earnings would put less into women's individual retirement accounts; (9) available evidence indicates that women also tend to invest more conservatively than men, and thus would likely earn smaller returns on their accounts, although they would bear less risk; (10) in addition, how such accounts are structured will be extremely important to women; (11) for example, whether individuals will be required to purchase annuities with the proceeds of their accounts at retirement and how the annuities are priced could affect women quite differently from men; and (12) how benefits might be distributed to divorcees and how accounts are transferred to survivors could affect the retirement income of some elderly women.
Contingency operations can encompass a number of potentially dangerous or uncertain environments, which increase risks to federal agencies and personnel, including contractors that support those agencies. The U.S. Code defines a contingency operation, in relevant part, as a military operation designated by the Secretary of Defense in which armed forces are or may become involved in military actions, operations, or hostilities against an enemy of the United States or results in the call to active duty of members of the uniformed services, such as in Iraq and Afghanistan. However, there are other environments that do not fall within this definition but potentially pose similar challenges. USAID officials noted that its personnel operate in challenging environments on a daily basis and that the agency uses various terms to describe these environments. For example, USAID’s programming policy defines high-threat environment as a country, city, area, subregion, or region in which USAID is hindered in accomplishing its mission due to security risks such as (1) specific targeting of U.S. interests, (2) a favorable operating environment for terrorist groups, (3) intelligence indicating that a threat is imminent, or (4) other significant risk as identified by various security offices. State operates in similar environments. For example, in its 2010 Quadrennial Diplomacy and Development Review, State noted that more than 25 percent of State and USAID personnel serve in 30 countries classified as high risk for conflict and instability, including Yemen, Democratic Republic of Congo, Sudan, and Kyrgyzstan. For the purposes of this report, we use the term “contingency operations” to encompass the range of potentially dangerous or uncertain environments in which USAID and State operate.
Our work over the past 5 years, as well as that of others in the accountability community, has found that State and USAID have experienced systemic challenges that have hindered their ability to manage and oversee contracts in contingency environments. In our reports issued annually between 2008 and 2011, we consistently found that State and USAID lacked reliable data and systems to report on their contracts and contractor personnel in Iraq and Afghanistan. Having reliable data on contracts and contractor personnel is a starting point for informing agency decisions and ensuring proper management. In 2012, we reported that the agencies had made improvements to their contractor personnel data and related systems that could result in more reliable data, but data comparability across years and agencies was limited. We also reported in 2012 that State had not assessed the skills and workforce mix needed to meet future contracting requirements in Iraq and Afghanistan. Further, we found that State had insufficient personnel with the necessary expertise to conduct acquisition planning and oversight functions to support the department’s mission in Iraq, and, as a result, relied on the Department of Defense (DOD) for acquisition support. We also found weaknesses in USAID’s oversight and monitoring of project and program performance in Afghanistan. Specifically, we reported that USAID did not consistently follow its established performance management and evaluation procedures for Afghanistan agriculture and water projects. USAID subsequently issued new guidance, approved a new performance management plan, and took steps to improve its third-party monitoring of projects. Similarly, others in the accountability community have reported on the acquisition challenges faced by State and USAID in contingency environments.
For example, Inspectors General at both agencies identified aspects of contracting in contingency environments as a serious management challenge in the agencies’ fiscal year 2012 financial reports. State’s Inspector General found that the department continued to face challenges managing contracts and procurements, and reported that it has identified instances in Iraq and Pakistan in which poor contract monitoring resulted in increased costs and poor performance. Similarly, USAID’s Inspector General identified USAID’s work in high-threat environments such as Afghanistan, Pakistan, Iraq, and South Sudan, as one of the agency’s most serious management challenges, in part due to audits that disclosed weak contract management in these environments. In addition, the Special Inspector General for Iraq Reconstruction and the Special Inspector General for Afghanistan Reconstruction have reported on similar contracting issues. Further, the statutorily established Commission on Wartime Contracting in Iraq and Afghanistan made a number of recommendations directed to State and USAID in its final report, issued in 2011. Recommendations include those related to using risk factors to decide what functions are appropriate to contract for in contingency settings, ensuring the government can provide sufficient acquisition management and contractor oversight, and taking actions to mitigate the threat of additional waste due to a lack of sustainment by host governments. Section 850(a) of the fiscal year 2013 NDAA mandated that State and USAID submit, to the appropriate Congressional committees, assessments of agency policies governing contract support in overseas contingency operations. Section 850(a) required State and USAID to submit their reports to Congress by July 2, 2013. Section 850(b) of the NDAA provided that the reports 1. Describe and assess the roles and responsibilities of officials and offices with contract-related responsibilities in overseas contingency operations; 2.
Include procedures and processes associated with eight elements of contracting, including collection, inventory, and reporting of data; acquisition planning; solicitation and award of contracts; requirements development and management; contract tracking and oversight; performance evaluations; risk management; and interagency coordination and transition planning; and 3. Include strategies and improvements necessary to address workforce issues in overseas contingencies, including reliance on contractors. State and USAID submitted their reports to Congress on June 25 and July 1, 2013, respectively. State and USAID identified a number of actions needed to improve contracting in contingency environments. Overall, the actions identified by State and USAID may help position each agency to better support future contingency operations, but many of the identified changes are new or have not been fully implemented and the agencies generally have not established plans to assess the impact of the changes. Federal internal control standards highlight the importance of managers comparing actual performance to expected results. For example, in October 2013, State’s Under Secretary for Management approved a series of actions, which included changing the organizational structure by establishing a new staff unit to oversee elements of contracting in overseas contingency operations, a new contract risk mitigation effort, and the creation of specific contingency contracting policy in its Foreign Affairs Manual. In some instances, such as establishing the new risk mitigation staff, State intends to assess the impact of the initiatives. According to State officials, they are still developing specific plans and time frames to implement several other initiatives. We found that State has not indicated whether or how it intends to assess the impact of these other initiatives. Continued management attention is needed to ensure that these efforts achieve their intended objectives. 
USAID identified several actions needed to improve areas such as the collection, inventory, and reporting of data; contractor performance evaluations; and risk management. In response to long-standing challenges that the agency faces in implementing and monitoring activities in high-threat environments, USAID established a nonpermissive environment working group in October 2013 to develop lessons learned, toolkits, and training, but it is not expected to complete this effort until the end of September 2014. We found that USAID missions and offices with responsibilities for responding to contingencies have established procedures and practices, but USAID did not consider whether these procedures should be institutionalized agency-wide. USAID officials explained that they took a narrow view of the mandate and focused their assessment on agency-wide policies. As a result, USAID may have missed opportunities to leverage its institutional knowledge to better support future contingencies. State’s Section 850 report to Congress concluded that its organizational structure was generally adequate to support overseas contingency operations and identified several actions needed to improve areas such as acquisition planning, contract oversight, and interagency coordination (see table 1). In addition, our review identified other contingency contracting related actions taken by State, including those identified after State submitted its report to Congress. To identify needed changes, State established working groups comprised of key officials from offices across the department to assess State’s organizational structure and contracting procedures in overseas contingency operations. 
In October 2013, State’s Under Secretary for Management approved a series of recommendations proposed by the working groups, which included establishing a new staff unit to oversee elements of contracting in overseas contingency operations and other critical environments, as well as the creation of a specific contingency contracting chapter in the Foreign Affairs Manual, which contains the organizational responsibilities and authorities of each of the major components of the department. According to State officials, they are still developing specific plans and time frames to implement many of the changes approved in October, but generally have not developed plans to assess the impact of these initiatives. State officials subsequently reported that the Critical Environment Contract Analytics Staff, which was formally established in December 2013, will be responsible for using metrics to assess the effectiveness and performance of State’s planned initiatives. State determined that its centralized organizational structure was generally effective and efficient for contract support in overseas contingency operations. To improve management and oversight of contract performance of major contracts in Iraq, State established a new regional Contract Management Office in August 2013. State plans to review the effectiveness of the Contract Management Office after it has been operational for 1 year to determine gaps and applicability as a department-wide model for contracting in future contingency environments. State officials noted that the office, currently located in Iraq, will move to Kuwait in mid-2014 to allow for administrative support for other contracts in the region as resources are available. Appendix II describes the roles and responsibilities of State’s contracting organization used to support contingency operations. 
With regard to tracking of contractors and contractor personnel, State has previously taken steps to improve the collection, inventory, and reporting of contractor and contractor personnel data within DOD’s Synchronized Predeployment and Operational Tracker (SPOT) database for overseas contingency operations. In 2008, Congress required DOD, State, and USAID to begin tracking use of contractors in Iraq and Afghanistan. To do so, the agencies agreed to use DOD’s SPOT database. To implement this requirement, State issued a procurement information bulletin in March 2008, which directed State contracting officers (CO) to designate the use of SPOT for all applicable contracts in Iraq and Afghanistan and include a clause requiring contractors to provide certain data within SPOT. However, we consistently found in reports issued annually since 2008 that State lacked reliable sources and methods to report on its contracts and contractor personnel in Iraq and Afghanistan. According to State officials, to improve State’s reporting, the Office of Acquisitions Management developed additional guidance in fall 2012 which outlined the process of inputting contracts, how contractors should enter contractor personnel, how to request letters of authorization and approvals, and how to enter data through contract close-out. The new regional Contract Management Office will be responsible for overseeing contractors’ input of Iraq contractors’ data into SPOT. State’s Section 850 report noted that the use of a centralized acquisition office in Washington D.C., and two regional procurement support offices help solicit and award contingency contracts. State officials noted that approximately 90 percent of its acquisition dollars and all major programs are managed by either its Office of Acquisitions Management or its two regional procurement support offices in Florida and Germany. 
Further, it noted that contracts for contingency operations in Iraq and Afghanistan are managed by its Office of Acquisitions Management. At individual missions and posts, State’s general services officers have contracting authority limited to noncomplex transactions below $250,000. State noted that this approach provided sufficient support in terms of soliciting and awarding contracts. State’s Section 850 report noted that contract requirements were generally developed either at State’s in-country posts or its functional bureaus, and stated that its requirements development process was adequate. To improve acquisition planning, State developed a draft update to the Foreign Affairs Manual that explains that acquisition planning for contingency operations requires special attention and has designated staff to facilitate planning in contingency environments. In addition, State issued standard operating procedures in May 2013 for support cells that will be coordinated on an as-needed basis at the start of a contingency. The support cell will assist in the opening or reopening of posts based on lessons learned and will be located within the regional bureau. The support cell process was incorporated into the Foreign Affairs Manual and includes a typical support cell organization chart, a work process map documenting the steps that determine a situational plan of action, and a checklist document that can be used when creating an operational plan to open or reopen a post. According to State officials, the new procedures were used in 2013 to establish a support cell for a potential contingency in Syria. To carry out contract oversight, State generally relies on contracting officer’s representatives (COR) appointed from programs within the department’s regional and functional bureaus to help ensure that the contractor accomplishes the required work. 
State’s Section 850 report noted, however, that State lacks CORs with certain technical subject matter expertise for contingency operations, such as medical and aviation. Additionally, State officials noted the need to improve training for CORs and cited several actions the department has taken to do so. For example, according to State officials, the department has revised COR training to be more scenario-based and initiated an effort to better manage the COR function across the department due to the growth in the number of CORs in areas such as Afghanistan and Pakistan. In March 2013, State reinstituted a COR council to collect agency COR data and to develop plans and actions that improve the effectiveness of the COR function. State officials told us that they intend to use the Federal Acquisition Institute Training Application System to track COR certifications and allow COs and bureaus to identify qualified CORs when needed. In October 2013, State proposed a revision to the Foreign Affairs Manual that would assign responsibility to the individual bureaus to define specialized training or experience requirements for CORs to ensure effective contract oversight. State reported that staffing the newly established regional Contract Management Office in Iraq with specific technical skills and identifying individuals with previous COR experience represents a continuing challenge that they will try to address through training and mentoring, among other things. In prior reports, we found that State had taken actions to address the challenges it encountered in overseeing private security contractors in Iraq. In July 2013, the Under Secretary for Management testified that private security contractors are critical to State’s readiness and capability to carry out U.S. foreign policy under dangerous and uncertain security conditions.
The Under Secretary emphasized that maintaining this capability is particularly important when the department is taking on expanding missions in contingency operations or areas that are transitioning from periods of intense conflict, such as in Iraq and Afghanistan. State officials noted that the department continues to improve its program for private security contractors in contingency operations. For example, the department revised its training requirements and issued contractor standards of conduct to ensure the professionalism of private security contractors. In addition, State’s draft update to the Foreign Affairs Manual outlines the roles and responsibilities and training standards of private security contractors. Officials expect revisions to the Foreign Affairs Manual will be approved in 2014. State’s Section 850 report noted that reporting contractor performance evaluations in the government-wide Contractor Performance Assessment Reporting System (CPARS) needed improvement. The Federal Acquisition Regulation (FAR) requires agencies to prepare an evaluation of contractor performance for each contract that exceeds the simplified acquisition threshold at the time the work is completed and gives agencies discretion to include interim evaluations for contracts with a performance period exceeding 1 year. To do so, State has started several initiatives, including more dedicated staff time to manage reporting, monthly meetings held by the Office of Acquisitions Management and the Office of the Procurement Executive to track reporting progress, and increased training on past performance reporting and evaluations. Further, the Office of the Procurement Executive is working with the human resources division to incorporate work elements into CORs’ performance appraisals. 
State officials indicated that they have established a goal that contractor past performance reporting will be at a 45-percent completion rate by December 31, 2013. State also intends to incorporate a requirement within its Foreign Affairs Manual for State personnel to conduct contractor program reviews of contracts over $25 million for contingency operations at least semi-annually, and document them in the government-wide database. In December 2013, State established the Critical Environment Contract Analytics Staff to centrally coordinate and perform, among other things, contract risk assessments and mitigation plans. As outlined in the October 2013 action memorandum, this group will be responsible for coordinating with other U.S. government agencies and monitoring contracting readiness in critical environments. The group also will be expected to expand the contract risk assessments in Afghanistan and other high-threat, high-risk posts, such as locations with major multibureau contracts. According to State officials, the Critical Environment Contract Analytics Staff will be composed of three individuals and as of January 2014, staffing and recruitment efforts were underway. In 2011, we found that State had not developed a process to vet contractor firms in Afghanistan and recommended that State assess the need to vet non-U.S. vendors to ensure that resources are not diverted to insurgent or criminal groups. In response, State began a pilot program in October 2012 to vet contractors and grantees for links to terrorists or their supporters. The pilot was designed to assess risk and test the utility of vetting across the range of department operations and risk profiles. The department works in tandem with regional posts and USAID on the current pilot program, which includes Lebanon, Guatemala, Kenya, Ukraine, and the Philippines. In addition to these countries, Afghanistan and Syria have been included to reflect vetting in high-risk countries.
The pilot was scheduled to be completed in October 2013, but State officials stated that it will be extended and that the completion date has not been determined. State noted that as of August 2013, preliminary results suggest that the effectiveness of vetting is related to the prevalence of conflict, but that few data are available from the five pilot countries due to the limited number of contracts and grants in those countries. State’s Section 850 report noted that interagency coordination was an area that needed improvement. In September 2010, DOD and State established an Executive Steering Group, co-chaired by the DOD’s Deputy Assistant Secretary for Program Support and State’s Deputy Assistant Secretary for Logistics Management, to help State identify critical requirements in Iraq for which it had previously relied on DOD and address other issues associated with the transition. State reported that ad hoc working groups, such as the Executive Steering Group, provide timely coordination on specific implementation issues in contingency environments. In addition, officials told us in September 2013 that the Iraq contract transition working group continued to host biweekly meetings between State and DOD to address requirements and post-transition activity in Iraq. An Executive Steering Group modeled after the group in Iraq was also established for Afghanistan in May 2011. In addition, State established the Management Transition Office in Kabul in June 2011 to help with the transition planning in Afghanistan for a post-2014 presence. State officials have noted that they are less dependent on DOD for services in Afghanistan, but they are taking lessons learned from Iraq and implementing them as appropriate. According to State officials, no agreements have been finalized on requirements for the post-2014 presence as they are waiting for drawdown and security plans to be finalized. 
In addition to the groups already established in Iraq and Afghanistan, State reported that it will coordinate with other agencies at the outset of future contingencies to define which working groups should be initiated. In addition, the newly approved contract risk assessment and mitigation staff described above will act as the coordinator for interagency acquisition agreements during contingency operations. The FAR requires, among other things, that agencies carefully consider whether an interagency acquisition is based on a sound business decision and formally document the terms and conditions in an interagency agreement. In August 2012, we found that State and DOD did not comply with requirements for use and management of assisted interagency acquisitions. To respond to recommendations we made in this 2012 report, State officials indicated that the department is working with DOD to enhance a database used to store information on joint interagency agreements. For existing interagency agreements, the department continues to work with DOD to identify any missing required justifications and put them in place in accordance with FAR requirements. Additionally, in response to previous recommendations, State issued a procurement information bulletin in January 2013 defining the process and requiring the use of interagency agreements. We also previously found that interagency coordination in Iraq began late, which caused State to have limited insights into its use of interagency acquisitions and hindered contract oversight. State officials told us that they want to focus on longer-term continuity by institutionalizing interagency agreements before determining how they would handle interagency coordination and transition planning for future contingencies. State intends to assess the impact of some initiatives, such as establishing the regional Contract Management Office and the contractor vetting pilot program.
According to State officials, they are still developing specific plans and time frames to implement many of the changes discussed above. We found that State has not indicated whether or how it intends to assess the impact of some initiatives. For example, as outlined in the October 2013 action memorandum, State plans to establish a new staff to centrally coordinate and assist in managing contract risk assessments and mitigation plans but has not yet determined how to assess the impact of the office. Federal Internal Control Standards highlight the importance of reviews by management at the functional level to compare actual performance to planned or expected results and analyze significant differences. Accordingly, without management reviews to assess planned and actual performance, State may not be able to determine whether these initiatives better enable it to support future contingency operations. State officials subsequently reported that the Critical Environment Contract Analytics Staff will be responsible for assessing the effectiveness and performance of the planned initiatives. USAID concluded that its organizational structure was adequate to support contingency contracting efforts, but identified several actions needed to improve areas such as collection, inventory, and reporting of data; contractor performance evaluations; and risk management (see table 2). Our review also identified other contingency contracting-related actions taken by USAID that were not included in its report to Congress or were identified after USAID submitted its report to Congress.
To develop its Section 850 report, USAID officials told us that USAID’s Office of Acquisition and Assistance in the Bureau for Management (M/OAA)—the office responsible for developing, issuing, and maintaining the agency’s acquisition regulations, procedures, and standards—was tasked with assessing agency-wide acquisition policies, such as its Automated Directives System and the FAR, to determine if new policies or changes to existing policies or procedures were needed to improve contracting in overseas contingency operations. Section 850 required USAID to assess its policies and procedures related to contract support of contingency operations. USAID officials said that they took a narrow view of the mandate and did not include operational procedures from missions that have played key roles in contingency operations in their assessment because they interpreted the legislative requirement to include only agency policy rather than operational procedures developed by individual missions. Further, according to agency officials, the Office of Foreign Disaster Assistance and the Office of Transition Initiatives within USAID’s Bureau for Democracy, Conflict, and Humanitarian Assistance did not participate in developing USAID’s Section 850 report as these offices do not create contracting policy. However, officials from these two organizations noted that they are often the first to respond to disasters or contingencies and play a key role in successfully transitioning from short-term, quick-impact interventions to longer-term traditional development programming. For example, working in-country, the Office of Transition Initiatives designs acquisition instruments and develops relationships with implementing partners—information that could benefit the mission if leveraged. As a result, USAID may have missed opportunities to leverage its institutional knowledge to better support future contingencies.
However, in response to long-standing challenges that the agency faces in implementing and monitoring activities in high-threat environments, USAID established a nonpermissive environment working group in October 2013. By September 2014, the working group plans to develop a compendium of best practices and lessons learned for implementing and monitoring projects in nonpermissive environments; an operations security toolkit that will include tools for enhanced monitoring, and possibly a field information technology support package; and a targeted set of training and learning tools that focus on how USAID prepares staff for managing risks inherent in working in overseas contingency environments. This working group affords USAID another opportunity to leverage its institutional knowledge, such as that residing at its missions and other offices with contingency contracting-related responsibilities. USAID described the offices that support contingency contracting in its Section 850 report and concluded that its decentralized organizational structure for contracting activities is an effective and efficient model for overseas contingency operations. USAID delegates authority to heads of USAID contracting activities to carry out the programs and activities for which they are responsible—including execution of contracts and the establishment of procurement policies, procedures, and standards appropriate for their programs and activities, subject to government-wide and USAID regulations and policy. USAID reported that the agency works on a daily basis in countries characterized by many of the same conditions found in contingency operations; therefore, USAID concluded that its existing organizational structure for contracting activities can be easily applied, when necessary, in contingency operations. Further, USAID reported that its decentralized model gives staff flexibility in addressing issues that arise in contingency operations. 
Appendix III describes the roles and responsibilities of USAID’s contracting organization used to support contingency operations. USAID is in the early stages of developing a proposal to use SPOT solely as a tool to track contractor personnel in contingency environments rather than to track the number and value of contracts. USAID officials stated that other data systems, such as the Federal Procurement Data System–Next Generation and its Global Acquisition and Assistance System, provide more reliable information on the number and value of contracts. USAID plans to present this proposal to DOD and State for their consideration. With regard to solicitation and award of contracts, USAID reported that its contract writing system, the Global Acquisition and Assistance System, has been deployed worldwide and will allow personnel to write, manage, oversee, and report on USAID awards from any location. According to USAID, 80 percent of USAID-obligated funds are now managed through the Global Acquisition and Assistance System. USAID did not identify any additional changes needed to its requirements development or acquisition planning processes for contingency operations in its Section 850 report. USAID’s missions and bureaus are responsible for establishing requirements prior to contract award and preparing a written acquisition plan that defines these requirements, if necessary. We previously identified that written acquisition plans, requirements development, cost estimating, incorporating lessons learned, and allowing sufficient time to conduct acquisition planning are several important elements of successful acquisition planning. In 2011, we found that USAID did not require written acquisition plans for individual contracts. We recommended that USAID establish requirements for written acquisition plans and enhance guidance for lessons learned in acquisition planning, among other things. 
In response to our recommendation, in April 2013 USAID finalized its acquisition planning chapter in the Automated Directives System, which provides the agency’s policy directives, required procedures, and internal guidance for the planning of USAID direct acquisition and assistance activities, including requirements for preparing written acquisition plans for individual contracts. However, we found that the policy does not require a discussion of lessons learned, including insights on the performance of the contract and any issues the program may have encountered. While USAID did not identify contract oversight as an area needing improvement in its Section 850 report, USAID reported that it has completed or is taking steps at the mission level to improve contract oversight in overseas contingency operations. For example: In Iraq, USAID reported that it made a number of management changes to meet mission needs, including modifying contracts to include more stringent reporting requirements. USAID also increased the number of CORs in-country and provided them with additional guidance on ensuring compliance related to reporting. As the U.S. military presence draws down in Afghanistan, USAID officials acknowledged that they may be challenged to adequately monitor project progress. To address this concern, in September 2010 USAID created on-site monitor positions at project sites to devolve project monitoring responsibilities to USAID personnel in the five regional commands. In addition, USAID/Afghanistan provided training for on-site monitors on the acquisition process. According to M/OAA officials in Kabul, the USAID mission in Afghanistan recently established support units for contracting activities. For example, mission officials told us that they established a compliance division approximately 1 year ago to ensure that implementing partners and M/OAA are in compliance with USAID acquisition policy. 
Further, USAID officials told us that the mission in Afghanistan staffed a contract management team in 2012 to support COs by tracking audit recommendations, ensuring their implementation, and performing contract closeout. In addition, USAID is planning to implement a remote monitoring program in Afghanistan that will rely on individuals hired by the contractor to verify activities that implementing partners have completed at project sites. This initiative will be composed of a set of monitoring methods that will be used to verify project performance, including third-party monitors, Global Positioning System tracking, photography, and data collection with mobile devices, among other methods. To implement this initiative, USAID issued a draft request for proposals in May 2013 publicizing its intent to negotiate up to three contracts. According to USAID officials, as of October 2013, the agency is in the process of finalizing the request for proposals. The USAID mission in Pakistan reported that it has developed several monitoring and evaluation mechanisms that are especially useful in geographically remote areas where USAID staff have limited access due to security restrictions. For example, USAID contracts with several independent local contractors to monitor implementation in insecure areas. In May 2013, the Inspector General recommended that the USAID mission in Pakistan implement a mission-wide monitoring and evaluation plan to cover all aspects of mission programs. USAID/Pakistan concurred with this recommendation. USAID reported that it has identified increasing the submission of contractor performance evaluations in CPARS as one of the agency’s highest acquisition priorities. 
To do so, M/OAA has established quarterly targets for reporting in the contractor past performance database to measure its progress in meeting the agency’s 65-percent reporting goal for fiscal year 2013 and embarked on a communications and training effort for COs and CORs. USAID has taken actions to address contracting risks in contingency environments, including reliance on contractors and the risk of terrorist financing. For example, USAID’s Bureau for Policy, Planning and Learning revised the agency’s planning policy to incorporate a risk assessment requirement for using contractor support in overseas contingency operations, as required by the 2013 NDAA. According to agency officials, they expect the draft revised policy to be finalized in January 2014. Further, in 2011, we found that USAID’s vendor vetting process in Afghanistan faced limitations and recommended that USAID consider formalizing a risk-based approach that would enable it to identify and vet the highest-risk vendors and partners. USAID concurred with the recommendation and, in April 2013, completed deployment of its Partner Vetting System—a centralized database used to support the vetting of individuals—to decrease the risk of terrorist financing, and is preparing templates and implementation guidance for COs. In addition, individual USAID missions develop and implement operational procedures as necessary to address environment-specific risks. For example, USAID reported that, in the fall of 2010, it launched the Accountable Assistance for Afghanistan initiative to further protect taxpayer dollars from being diverted from their development purpose. The initiative consists of several components, including limiting the number of layers of subcontracts and establishing financial controls such as auditing all locally incurred costs and ensuring close review of contractor claims prior to payment. 
USAID’s Office of Afghanistan and Pakistan Affairs represents USAID in interagency discussions related to the contingency operation in Afghanistan. Further, the Office of Afghanistan and Pakistan Affairs provides support to the USAID mission in Afghanistan so that it may provide input to key interagency stakeholders. While USAID did not identify interagency coordination as an area needing improvement in its Section 850 report, according to USAID officials, the Office of Afghanistan and Pakistan Affairs details staff on an ongoing basis to State and DOD offices to strengthen interagency coordination. Further, according to USAID officials, the Office of Afghanistan and Pakistan Affairs and the USAID mission in Afghanistan have conducted extensive planning regarding management and oversight of procurements in light of the upcoming transition from a DOD to a State-led presence in Afghanistan. Principal procurement management responsibilities are expected to remain with USAID staff based in Kabul, in consultation with Washington, D.C.-based staff. Since 2011, State and USAID have increased their overall acquisition workforces and are in various stages of assessing their workforce needs for overseas contingency operations. Per Office of Management and Budget (OMB) guidance, both agencies identified competency and skill gaps for their acquisition workforces in their 2013 acquisition human capital plans. State’s 2013 plan noted that, in response to growth in contracting activity in areas such as Iraq and Afghanistan, additional acquisition personnel were needed. In October 2013, State’s Under Secretary for Management approved the formation of a multibureau working group that plans to further explore workforce needs for current and future contingency operations. USAID launched a program in 2008 that has worked to rebuild a cadre of contracting officers, and USAID’s 2013 plan cited providing training for a young acquisition workforce as the agency’s greatest challenge. 
State noted in its Section 850 report that it will increase its focus on conducting risk assessments on the reliance, use, and oversight of contractors through the establishment of risk management staff. USAID’s Section 850 report did not address reliance on contractors, but in October 2013, USAID drafted a revision to its planning policy to require a risk assessment and mitigation plan associated with contractor performance of critical functions in overseas contingency operations. Since 2011, both State and USAID have increased the size of their acquisition workforces. State reported that its workforce grew by 53 percent, while USAID’s workforce grew by about 15 percent (see table 3). State’s 2013 acquisition human capital plan noted that, in response to growth in contracting activity in areas such as Iraq and Afghanistan, additional acquisition personnel were needed. To increase its acquisition workforce, State officials told us that they realigned and designated existing staff as acquisition personnel and undertook limited hiring. Further, State officials noted that the department’s acquisition workforce growth was in part aided by the use of its working capital fund, which is generated through a 1-percent fee on all procurements. In 2012, we found that State had not assessed the extent to which the working capital fund has helped State surge its workforce to meet requirements for Iraq and Afghanistan. While USAID also reported increasing its overall acquisition workforce from fiscal year 2011 to 2013, USAID officials told us that they experienced a decrease in 2013 due to a number of factors, including a temporary loss of their direct hiring authority for hard-to-fill positions and a slowdown in hiring due to budget cuts as a result of sequestration. USAID officials stated that they anticipate that the number of acquisition personnel will increase in future years as personnel are hired to fill vacant positions. 
USAID officials do not anticipate an increase in the total number of authorized positions at this time. Responsible for overseeing the technical and administrative functions of a contract, CORs are an integral part of the acquisition workforce. USAID officials reported that their Global Acquisition and Assistance System identified 3,629 personnel in fiscal year 2013 who had been certified as CORs since 2008. However, USAID officials cautioned that they are continuing to improve the reliability of the data and that relying on the number of employees who have been certified as a COR may overstate the COR workforce at USAID. Further, USAID officials explained that the number of personnel the Global Acquisition and Assistance System reports as designated to serve as a COR on a specific award—which totaled about 1,116 as of November 2013—may be a more accurate representation of USAID’s COR workforce. State officials told us that 2,367 CORs were certified in fiscal year 2013. State and USAID are in various stages of assessing their workforce needs for overseas contingency operations and have identified skills and competency gaps in their acquisition workforces. OMB’s 2009 memorandum, Acquisition Workforce Development Strategic Plan for Civilian Agencies—FY 2010-2014, requires agencies to develop annual acquisition workforce human capital plans that identify strategies and goals for increasing the capacity and capability of the acquisition workforce. For example, the plan is to include recruitment and retention strategies for obtaining the acquisition workforce resources and skills required to meet future agency mission needs. We also identified leading practices agencies should follow when developing workforce plans, including determining the occupations, skills, and competencies that are critical to achieving their missions and goals, as well as identifying any gaps between their current workforce and the workforce they will need in the future. 
State’s March 2013 acquisition human capital plan provided information on identified skill and competency gaps in its acquisition workforce, as well as agency plans to address them. For example, State’s report identified that its contracting professionals had strong technical skills in contract award and administration, but noted weaknesses in acquisition planning, among other areas. State reported that it is focusing on pairing interns and recently hired staff with senior staff and mentoring them in the competencies that need additional attention. State is also continuing to use internal expertise and resources to provide in-house training sessions to supplement external training. In October 2013, State’s Under Secretary for Management approved the formation of a multibureau working group that plans to explore workforce needs for current and future contingency operations. Specifically, the working group will assess the existing COR structure, further analyze skill gaps to meet the department’s demands for surge personnel, and pursue expanded legislative authority for hiring personal services contractors in contingencies. For example, the group plans to review the feasibility of special pay incentives for critically needed COR skills and study whether a new career track for CORs with specialized training is needed. State did not identify when the working group is expected to complete its efforts. OMB’s July 2009 multisector workforce guidance directed agencies to determine the best mix of skills of contractors and federal employees and the appropriate workforce size for the agency. To do so, agencies were directed to conduct a pilot human capital analysis of at least one program where the agency has concerns about the extent of reliance on contractors and to adopt a framework for planning and managing a multisector workforce that is built on strong strategic human capital planning. 
According to State officials, in response to OMB’s guidance, they created a multisector workforce methodology, conducted two pilot programs, and realized efficiencies and cost savings. In August 2012, we found that State had not fully assessed whether its effort to increase its workforce was sufficient to meet requirements; whether it had the proper skills; whether it had the appropriate mix of government/contractor personnel; or whether it had sufficient numbers of qualified oversight personnel to support its future acquisition efforts in Iraq and Afghanistan. We recommended that State assess the extent to which the current acquisition workforce, both government and contracting personnel, meets its needs for acquiring goods and services in complex environments such as Iraq and Afghanistan. State concurred with this recommendation and, in September 2013, State officials noted that they are still considering options to address this recommendation, such as relying on the resources of an Afghanistan interagency coordination group to determine skill sets needed. Further, State noted in its Section 850 report that it will increase its focus on conducting risk assessments on the reliance, use, and oversight of contractors through the establishment of risk management staff. In addition, State issued an updated workforce policy on October 1, 2013, that directed bureaus on a regular basis to consider using federal employees to perform new or expanded functions performed by contractors. Similar to State, USAID’s 2013 acquisition human capital plan provided information on identified skill and competency gaps in its acquisition workforce. For example, USAID’s report identified that its contracting professionals had strong technical skills in contract administration and proposal evaluation, but noted weaknesses with acquisition planning, among other things. 
Similarly, USAID reported that its CORs had strong skills in project management and contract administration and weaknesses in market research and acquisition planning. In response, USAID developed training to improve COR technical skills through scenario-based learning. In addition, USAID launched an e-based resource in March 2013 that provides tools to guide personnel through the procurement process; the resource includes references for CORs at each stage of the contract award process. USAID’s 2013 acquisition human capital plan cited providing training for a young acquisition workforce as the agency’s greatest challenge, and USAID launched a program in 2008 that has worked to rebuild a cadre of contracting officers. According to USAID’s 2013 acquisition human capital plan, the average procurement professional within M/OAA has 5 years or less of experience in federal procurement policy and procedures. USAID established a Professional Development and Training Division in the fourth quarter of fiscal year 2013 to implement acquisition and assistance training. This division is intended to develop a formal process for learning across the acquisition workforce, but it is not focused specifically on training needs in contingency environments. USAID’s Section 850 report did not address reliance on contractors in connection with contingency operations, but in October 2013, USAID drafted a revision of its planning policy to require a risk assessment and mitigation plan associated with contractor performance of critical functions in overseas contingency operations. The draft revision will undergo an agency clearance process before becoming policy in 2014. According to the draft policy, the risk assessments will consider the core capabilities of government personnel and the risk of overreliance on contractors to monitor other contractors, among other things. 
The draft policy notes that the mitigation plan will include specific actions to mitigate or reduce the risks and impacts noted in the risk assessment, including the development of alternative capabilities to reduce reliance on contractors for critical functions. State and USAID have faced numerous contract management and oversight challenges while operating in contingency environments such as Iraq and Afghanistan. These challenges highlight the importance of effectively leveraging knowledge and developing ways to identify, mitigate, or avoid contracting pitfalls before new contingencies arise. Each agency has, to varying degrees, assessed, identified, and started implementing changes to improve its ability to overcome inherent risks of contracting in uncertain environments. State has outlined a series of initiatives to address weaknesses in areas such as the collection of contractor data, acquisition planning, contract oversight, risk management, and interagency coordination, and plans to institutionalize many of these changes in the department’s Foreign Affairs Manual within the next year. Except for a limited number of cases, State generally has not developed plans to assess the impact of these initiatives. Federal internal control standards highlight the importance of reviews by management to compare actual performance to planned or expected results and analyze significant differences. As a result, continued management attention is needed to ensure that these efforts achieve their intended objectives. While USAID has identified some needed improvements, such as completing contractor performance evaluations, it did not assess whether the procedures and practices created by the missions or offices that operate in contingency environments should be reflected in agency-wide policy or guidance. As a result, USAID may have missed opportunities to leverage its institutional knowledge. 
USAID recently established a nonpermissive working group responsible for developing lessons learned, toolkits, and training, which affords USAID another opportunity to take better advantage of its institutional knowledge. State and USAID have increased the size of their acquisition workforces in the past 3 years, and both have efforts in place to better ensure acquisition personnel are equipped with the skills needed to support future contingency operations. Both agencies have taken some initial steps to address reliance on contractors and to assess the appropriate mix of government and contractor personnel, but these efforts are in their infancy. To ensure that State is in a better position to support future contingencies, we recommend that the Secretary of State develop plans to assess whether planned initiatives are achieving their intended objectives. To ensure that USAID has the necessary policies and procedures to better position itself to address future contingency challenges, we recommend that the Administrator of USAID ensure that its nonpermissive working group consider procedures and practices developed by missions and offices with contingency-related responsibilities during the course of its efforts. We provided a draft of this report to State and USAID. In their written comments, the two agencies concurred with our recommendations and provided information on actions taken or planned to address them. Specifically, State created the Critical Environment Contract Analytics Staff on December 19, 2013, to develop and prepare department-wide contracting risk assessments and risk mitigation plans, coordinate efforts with other agencies, and monitor procurement readiness for contracting operations in critical environments. Additionally, the staff will be responsible for using metrics to assess the effectiveness and performance of planned initiatives. 
USAID plans to create a supplementary working group to the nonpermissive environment working group that will reach out to missions, offices, and contracting personnel with contingency operations experience to collect and disseminate a set of best practices for contracting in support of contingency operations and other potentially dangerous or uncertain environments. State’s letter is reprinted in appendix IV and USAID’s letter in appendix V. Both agencies provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of State, and the Administrator of USAID. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-4841 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Section 850(c) of the National Defense Authorization Act (NDAA) for Fiscal Year 2013 mandated that we report on the progress that the Department of State (State) and the U.S. Agency for International Development (USAID) have made in identifying and implementing improvements in a range of areas related to contract support for overseas contingency operations. The objectives for this review were to examine the extent to which State and USAID have identified and implemented changes to the agencies’ (1) organizational structures and policies; and (2) workforces, including their use of contractors. For the purposes of our review, we used the definition of overseas contingency operations set forth in Title 10 of the U.S. 
Code, as well as other contingency environments with characteristics such as unique security and logistical challenges; the need to contract quickly; difficulty in conducting oversight; difficulty traveling to dangerous or remote locations; events occurring unexpectedly; and frequent rotations among personnel. For all objectives, we reviewed State and USAID’s Section 850 reports submitted to Congress in June and July 2013, respectively; interviewed officials at State and USAID in the United States and Afghanistan with related program, acquisition, or workforce planning responsibilities to discuss their role in identifying and implementing changes, as appropriate; and reviewed and analyzed GAO and other oversight reports, including those from State and USAID’s Offices of the Inspector General, the Commission on Wartime Contracting, and the Special Inspectors General for Iraq and Afghanistan, to identify key challenges reported by the accountability community. To determine the extent to which State and USAID have identified and implemented changes related to their organizational structures, we reviewed State and USAID’s organizational charts and agency policy outlining roles and responsibilities in overseas contingency operations. To complement this information, we conducted interviews with key offices identified in the agencies’ Section 850 reports to obtain additional information regarding their roles and responsibilities in overseas contingency operations. In the case of State, these offices included the Office of Management Policy, Rightsizing, and Innovation; Office of Acquisitions Management; Deputy Assistant Secretary for Logistics Management; Kabul General Services Office; and the Afghanistan Transition Office. 
In the case of USAID, these offices included the Bureau for Management/Office of Acquisition and Assistance, the Office of Afghanistan and Pakistan Affairs, and the Office of Foreign Disaster Assistance and the Office of Transition Initiatives within USAID’s Bureau for Democracy, Conflict, and Humanitarian Assistance. We did not independently assess the adequacy of either State or USAID’s organizational structure. To determine the extent to which State and USAID have identified and implemented changes to the agencies’ contract award and management policies, we collected and analyzed agency documentation, such as descriptions of related working groups, acquisition and quality assurance plans, and draft working group charters. We also reviewed agency-wide and individual mission contracting policies and guidance, such as USAID’s Automated Directives System and Acquisition and Assistance Policy Directive, and State’s Foreign Affairs Manual and Procurement Information Bulletins, as well as relevant sections of the Federal Acquisition Regulation. We compared changes identified by the agencies in their Section 850 reports to challenges and potential changes identified in prior GAO and other oversight reports to determine the extent to which the agencies are addressing these challenges. To determine the extent to which State and USAID have identified and implemented changes related to their workforces for contract support in overseas contingency operations, we collected and analyzed detailed data on the composition of State and USAID’s acquisition workforces. Specifically, we reviewed and compiled acquisition workforce data for fiscal years 2011 to 2013 from each agency’s March 2013 acquisition human capital plan. We included data from the following categories: contracting officers, contracting specialists, contracting officer representatives, and program or project managers. We collected updated data on State and USAID’s acquisition workforces as of November 2013. 
We used this information to describe characteristics of each agency’s acquisition workforce and, as such, did not assess the reliability of the data. We reviewed a March 2013 USAID report on worldwide staffing patterns and other USAID data on its acquisition workforce for its five critical priority countries. We also reviewed various workforce-related reports, such as State and USAID’s 5-year succession plans and the Federal Acquisition Institute annual federal acquisition workforce reports. We also reviewed acquisition workforce guidance and memorandums from the Office of Management and Budget, including its 2009 multisector workforce guidance and its acquisition workforce development strategic plan. We conducted this performance audit from March 2013 to February 2014 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Description of Department of State’s (State) Contracting Organization for Contingency Operations. The Under Secretary for Management leads several bureaus, such as Administration and Diplomatic Security. As the leader of these bureaus, the Under Secretary is responsible for major contingency contracting policy decisions. The Assistant Secretary for Administration serves as State’s Chief Acquisition Officer, advises the Under Secretary for Management on the applicability of relevant policy on contracts for overseas operations, including contingency operations, and ensures the compliance of the contracts and contracting activities with such policies. 
The Office of the Procurement Executive supports the Chief Acquisition Officer and is responsible for promulgating acquisition policy, providing oversight, and defining and presenting acquisition training. The Office of Acquisitions Management is responsible for providing a full range of contracting services to support activities across State, including acquisition planning, contract negotiations, cost and price analysis, and contract administration. The office reports to the Deputy Assistant Secretary for Logistics Management. The Deputy Assistant Secretary for Logistics Management oversees the integration of logistics and acquisition and ensures complete supply chain accountability. Functional bureaus such as the International Narcotics and Law Enforcement Affairs, Overseas Buildings Operations, Human Resources, and Diplomatic Security are responsible for identifying and defining contracting requirements, providing technical contract administration, providing program and project management support, and training within their mission. Appendix III: Description of U.S. Agency for International Development’s (USAID) Contracting Organization for Contingency Operations Roles and responsibilities The Office of Acquisition and Assistance within the Bureau for Management is responsible for (1) developing, issuing, and maintaining the agency’s acquisition regulations, procedures, and standards in accordance with established agency delegations and requirements; and (2) evaluating the agency’s procurement system, providing recommendations for selecting and appointing contracting officers and terminating their appointments, and providing technical support to overseas contracting officers. Additionally, in accordance with Section 849 of the 2013 NDAA, the Director of the Office of Acquisition and Assistance advises the agency on the applicability of relevant policies on contracts for overseas contingency operations as defined in 10 U.S.C. 
§ 101(a)(13) and ensures the compliance of contracting activities with this policy. The Assistant Administrator for the Bureau for Democracy, Conflict, and Humanitarian Assistance has authority to negotiate, execute, and amend contracts for the purpose of immediately responding to disasters overseas. This authority is limited to grants and cooperative agreements up to $3 million, and contracts up to $500,000. The Office of Foreign Disaster Assistance and Office of Transition Initiatives are typically USAID’s first responders to a contingency situation. The Office of Civilian Response provides reconstruction and stabilization support, and provides staff surge support for contracting, including those seconded to the Office of Acquisition and Assistance for assignment. USAID acquisition and assistance staff at overseas missions provide advice and support to mission staff that design and manage assistance activities; they also have overall responsibility for the administration of acquisition instruments at overseas missions. Acquisition offices are typically headed by a contracting officer, who reports to the mission director, the principal USAID officer at post, or deputy mission director. At most bilateral missions, contracting officers are co-located with acquisition specialists and contracting officer’s representatives (COR). However, under USAID’s regional mission structure, contracting officers provide acquisition support to more than one mission and are not necessarily co-located with either the CORs or the acquisition specialists who assist them. Some missions with no on-site contracting officers may instead have on-site acquisition specialists who provide support to CORs. In addition to the contact named above, W. William Russell, Assistant Director; Peter Anderson; Lynn Cothern; Leigh Ann Haydon; Amber Keyser; John Krump; Anne McDonough-Hughes; Eddie Uyekawa; and Andrea Yohe made key contributions to this report. 
For more than a decade, State and USAID have used contractors extensively to help carry out missions in contingency operations, such as those in Iraq and Afghanistan. While State and USAID transition to more traditional diplomatic and assistance missions in Iraq and Afghanistan, contract management and oversight challenges remain significant because the agencies are likely to be called upon again to operate in future contingencies. Section 850(a) of the Fiscal Year 2013 NDAA directed State and USAID to assess their organizational structures, policies, and workforces related to contract support for overseas contingency operations. Section 850(c) mandated that GAO report on the progress State and USAID have made in identifying and implementing improvements related to those areas. GAO analyzed the extent to which State and USAID have identified and implemented changes to their (1) organizational structures and policies; and (2) workforces, including their use of contractors. GAO analyzed State and USAID's Section 850 reports to Congress, contract policies and procedures, and 2013 acquisition human capital plans, and interviewed agency officials. The Department of State (State) and U.S. Agency for International Development (USAID) identified a number of changes needed to improve contract support in overseas contingency operations, but have not completed implementation efforts. As required by the Fiscal Year 2013 National Defense Authorization Act (NDAA), both agencies determined that their organizational structures were effective, though State created a new regional Contract Management Office to better support contracting efforts in Iraq. In October 2013, State approved a number of actions to improve policies and procedures, including specific initiatives in acquisition planning and risk management, among others, and intends to institutionalize these changes in its Foreign Affairs Manual in 2014. 
State generally has not, however, developed plans to assess the impact of these initiatives. Federal internal control standards highlight the importance of managers comparing actual performance to expected results. Accordingly, continued management attention is needed to ensure that these efforts achieve their intended objectives. USAID focused its efforts on areas such as improving contractor performance evaluations and risk management. GAO found that some USAID missions and offices that operate in contingency environments have developed procedures and practices, but USAID did not consider whether these should be institutionalized agency-wide because USAID officials interpreted the legislative requirement to include only a review of agency-wide policies. As a result, USAID may have missed opportunities to leverage its institutional knowledge to better support future contingencies. USAID established a new working group in October 2013 to develop lessons learned, toolkits, and training and is expected to complete its efforts in late 2014. This working group could further assess the policies and procedures developed by the missions and offices, thus potentially affording USAID an opportunity to better leverage its institutional knowledge. State and USAID have increased their acquisition workforce by 53 and 15 percent, respectively, from their 2011 levels and are in various stages of assessing their workforce needs for overseas contingency operations. Per Office of Management and Budget guidance, both agencies identified competency and skill gaps for their acquisition workforce in their 2013 acquisition human capital plans. State's 2013 plan noted that in response to growth in contracting activity in areas such as Iraq and Afghanistan, additional acquisition personnel are needed. 
In October 2013, State's Under Secretary for Management approved the formation of a multibureau working group that plans to further explore workforce needs for current and future contingency operations. USAID's 2013 plan cited its greatest challenge as providing training for its acquisition workforce, as many personnel have 5 years or less of contracting experience. USAID established a training division in 2013 for its acquisition workforce. State noted in its Section 850 report that it will increase its focus on conducting risk assessments on the reliance, use, and oversight of contractors through the establishment of risk management staff. USAID's Section 850 report did not address reliance on contractors, but in October 2013, USAID drafted a revision to its planning policy that will require a risk assessment and mitigation plan associated with contractor performance of critical functions in overseas contingency operations. GAO recommends that State assess whether identified changes achieve intended objectives, and that USAID further assess contingency contracting related procedures and practices. State and USAID concurred with the recommendations.
According to the President’s budget, the federal government plans to invest more than $96 billion on IT in fiscal year 2018—the largest amount ever. However, as we have previously reported, investments in federal IT too often result in failed projects that incur cost overruns and schedule slippages, while contributing little to the desired mission-related outcomes. For example: The Department of Veterans Affairs’ Scheduling Replacement Project was terminated in September 2009 after spending an estimated $127 million over 9 years. The tri-agency National Polar-orbiting Operational Environmental Satellite System was halted in February 2010 by the White House’s Office of Science and Technology Policy after the program spent 16 years and almost $5 billion. The Department of Homeland Security’s Secure Border Initiative Network program was ended in January 2011, after the department obligated more than $1 billion for the program. The Office of Personnel Management’s Retirement Systems Modernization program was canceled in February 2011, after spending approximately $231 million on the agency’s third attempt to automate the processing of federal employee retirement claims. The Department of Veterans Affairs’ Financial and Logistics Integrated Technology Enterprise program was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011. The Department of Defense’s Expeditionary Combat Support System was canceled in December 2012 after spending more than a billion dollars and failing to deploy within 5 years of initially obligating funds. Our past work found that these and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT. 
Federal IT projects have also failed due to a lack of oversight and governance. Executive-level governance and oversight across the government has often been ineffective, specifically from chief information officers (CIO). For example, we have reported that some CIOs’ authority was limited because they did not have the authority to review and approve the entire agency IT portfolio. Recognizing the severity of issues related to the government-wide management of IT, FITARA was enacted in December 2014. The law was intended to improve agencies’ acquisitions of IT and enable Congress to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. FITARA includes specific requirements related to seven areas. Federal data center consolidation initiative (FDCCI). Agencies are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing their data centers (to include planned cost savings), and quarterly updates on progress made. The law also requires OMB to develop a goal for how much is to be saved through this initiative, and provide annual reports on cost savings achieved. Enhanced transparency and improved risk management. OMB and covered agencies are to make detailed information on federal IT investments publicly available, and agency CIOs are to categorize their IT investments by level of risk. Additionally, in the case of major IT investments rated as high risk for 4 consecutive quarters, the law requires that the agency CIO and the investment’s program manager conduct a review aimed at identifying and addressing the causes of the risk. Agency CIO authority enhancements. 
CIOs at covered agencies are required to (1) approve the IT budget requests of their respective agencies, (2) certify that OMB’s incremental development guidance is being adequately implemented for IT investments, (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO. See appendix I for details on the current status of federal CIOs. Portfolio review. Agencies are to annually review IT investment portfolios in order to, among other things, increase efficiency and effectiveness and identify potential waste and duplication. In establishing the process associated with such portfolio reviews, the law requires OMB to develop standardized performance metrics, to include cost savings, and to submit quarterly reports to Congress on cost savings. Expansion of training and use of IT acquisition cadres. Agencies are to update their acquisition human capital plans to address supporting the timely and effective acquisition of IT. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres or developing agreements with other agencies that have such cadres. Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user. Maximizing the benefit of the Federal Strategic Sourcing Initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the Federal Strategic Sourcing Initiative. OMB is also required to issue regulations related to the initiative. In June 2015, OMB released guidance describing how agencies are to implement FITARA. 
This guidance is intended to, among other things: assist agencies in aligning their IT resources with statutory requirements; establish government-wide IT management controls that will meet the law’s requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements; clarify the CIO’s role and strengthen the relationship between agency CIOs and bureau CIOs; and strengthen CIO accountability for IT costs, schedules, performance, and security. The guidance identified several actions that agencies were to take to establish a basic set of roles and responsibilities (referred to as the common baseline) for CIOs and other senior agency officials, which were needed to implement the authorities described in the law. For example, agencies were required to conduct a self-assessment and submit a plan describing the changes they intended to make to ensure that common baseline responsibilities were implemented. Agencies were to submit their plans to OMB’s Office of E-Government and Information Technology by August 15, 2015, and make portions of the plans publicly available on agency websites no later than 30 days after OMB approval. As of November 2016, all agencies had made their plans publicly available. In addition, in August 2016, OMB released guidance intended to, among other things, define a framework for achieving the data center consolidation and optimization requirements of FITARA. The guidance includes requirements for agencies to: maintain complete inventories of all data center facilities owned, operated, or maintained by or on behalf of the agency; develop cost savings targets for fiscal years 2016 through 2018 and report any actual realized cost savings; and measure progress toward meeting optimization metrics on a quarterly basis. The guidance also directs agencies to develop a data center consolidation and optimization strategic plan that defines the agency’s data center strategy for fiscal years 2016, 2017, and 2018. 
This strategy is to include, among other things, a statement from the agency CIO stating whether the agency has complied with all data center reporting requirements in FITARA. Further, the guidance indicates that OMB is to maintain a public dashboard that will display consolidation-related cost savings and optimization performance information for the agencies. In February 2015, we introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. This area highlighted several critical IT initiatives in need of additional congressional oversight, including (1) reviews of troubled projects; (2) efforts to increase the use of incremental development; (3) efforts to provide transparency relative to the cost, schedule, and risk levels for major IT investments; (4) reviews of agencies’ operational investments; (5) data center consolidation; and (6) efforts to streamline agencies’ portfolios of IT investments. We noted that implementation of these initiatives was inconsistent and more work remained to demonstrate progress in achieving IT acquisition and operation outcomes. Further, our February 2015 high-risk report stated that, beyond implementing FITARA, OMB and agencies needed to continue to implement our prior recommendations in order to improve their ability to effectively and efficiently invest in IT. Specifically, from fiscal years 2010 through 2015, we made 803 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations. These recommendations included many to improve the implementation of the aforementioned six critical IT initiatives and other government-wide, cross-cutting efforts. We stressed that OMB and agencies should demonstrate government-wide progress in the management of IT investments by, among other things, implementing at least 80 percent of our recommendations related to managing IT acquisitions and operations within 4 years. 
In February 2017, we issued an update to our high-risk series and reported that, while progress had been made in improving the management of IT acquisitions and operations, significant work still remained to be completed. For example, as of May 2017, OMB and the agencies had fully implemented 380 (or about 47 percent) of the 803 recommendations. This was a 24 percent increase compared to the percentage we reported as being fully implemented in 2015. Figure 1 summarizes the progress that OMB and the agencies had made in addressing our recommendations, as compared to the 80 percent target, as of May 2017. In addition, in fiscal year 2016, we made 202 new recommendations, thus further reinforcing the need for OMB and agencies to address the shortcomings in IT acquisitions and operations. Also, beyond addressing our prior recommendations, our 2017 high-risk update noted the importance of OMB and federal agencies continuing to expeditiously implement the requirements of FITARA. To better inform policymakers and government leadership, we convened a forum on September 14, 2016, to explore challenges and opportunities for CIOs to improve federal IT acquisitions and operations. Forum participants, which included 13 current and former federal agency CIOs, members of Congress, and private sector IT executives, identified key actions related to seven topics: (1) strengthening FITARA, (2) improving CIO authorities, (3) budget formulation, (4) governance, (5) workforce, (6) operations, and (7) transition planning. A summary of the key actions, by topic area, identified during the forum is provided in figure 2. In addition, in January 2017, the Federal CIO Council concluded that differing levels of authority over IT-related investments and spending have led to inconsistencies in how IT is executed from agency to agency. 
According to the Council, for those agencies where the CIO has broad authority to manage all IT investments, great progress has been made to streamline and modernize the federal agency’s footprint. For the others, where agency CIOs are only able to control pieces of the total IT footprint, it has been harder to achieve improvements. The administration has initiated two efforts aimed at improving federal IT. Specifically, in March 2017, it established the Office of American Innovation to, among other things, improve federal government operations and services, and modernize federal IT. The office is to consult with both OMB and the Office of Science and Technology Policy on policies and plans intended to improve government operations and services, improve the quality of life for Americans, and spur job creation. In May 2017, the administration also established the American Technology Council to help transform and modernize federal IT and how the government uses and delivers digital services. The President is the chairman of this council, and the Federal CIO and the United States Digital Service administrator are members. Agencies have taken steps to improve the management of IT acquisitions and operations by implementing key FITARA initiatives. However, agencies would be better positioned to fully implement the law and, thus, realize additional management improvements, if they addressed the numerous recommendations we have made aimed at improving data center consolidation, increasing transparency via OMB’s IT Dashboard, implementing incremental development, and managing software licenses. One of the key initiatives to implement FITARA is data center consolidation. OMB established FDCCI in February 2010 to improve the efficiency, performance, and environmental footprint of federal data center activities and the enactment of FITARA reinforced the initiative. 
However, in a series of reports that we issued over the past 6 years, we noted that, while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in several areas, including agencies’ data center consolidation plans and OMB’s tracking and reporting on related cost savings. In these reports, we made a total of 141 recommendations to OMB and 24 agencies to improve the execution and oversight of the initiative. Most agencies and OMB agreed with our recommendations or had no comments. As of May 2017, 75 of our recommendations remained open. Also, in May 2017, we reported that the 24 agencies participating in FDCCI collectively had made progress on their data center closure efforts. Specifically, as of August 2016, these agencies had identified a total of 9,995 data centers, of which they reported having closed 4,388, and having plans to close a total of 5,597 data centers through fiscal year 2019. Notably, the Departments of Agriculture, Defense, the Interior, and the Treasury accounted for 84 percent of the completed closures. In addition, 18 of the 24 agencies reported achieving about $2.3 billion collectively in cost savings and avoidances from their data center consolidation and optimization efforts from fiscal year 2012 through August 2016. The Departments of Commerce, Defense, Homeland Security, and the Treasury accounted for approximately $2.0 billion (or 87 percent) of the total. Further, 23 agencies reported about $656 million collectively in planned savings for fiscal years 2016 through 2018. This is about $3.3 billion less than the estimated $4.0 billion in planned savings for fiscal years 2016 through 2018 that agencies reported to us in November 2015. Figure 3 presents a comparison of the amounts of cost savings and avoidances reported by agencies to OMB and the amounts the agencies reported to us. 
As mentioned previously, FITARA required agencies to submit multi-year strategies to achieve the consolidation and optimization of their data centers no later than the end of fiscal year 2016. Among other things, this strategy was to include such information as data center consolidation and optimization metrics, and year-by-year calculations of investments and cost savings through October 1, 2018. Further, OMB’s August 2016 guidance on data center optimization contained additional information for how agencies are to implement the strategic plan requirements of FITARA. Specifically, the guidance stated that agency data center consolidation and optimization strategic plans are to include, among other things, planned and achieved performance levels for each optimization metric; calculations of target and actual agency-wide spending and cost savings on data centers; and historical cost savings and cost avoidances due to data center consolidation and optimization. OMB’s guidance also stated that agencies were required to publicly post their strategic plans to their agency-owned digital strategy websites by September 30, 2016. As of April 2017, only 7 of the 23 agencies that submitted their strategic plans—the Departments of Agriculture, Education, Homeland Security, and Housing and Urban Development; the General Services Administration; the National Science Foundation; and the Office of Personnel Management—had addressed all five elements required by the OMB memorandum implementing FITARA. The remaining 16 agencies either partially met or did not meet the requirements. For example, most agencies partially met or did not meet the requirements to provide information related to data center closures and cost savings metrics. The Department of Defense did not submit a plan and was rated as not meeting any of the requirements. 
To better ensure that federal data center consolidation and optimization efforts improve governmental efficiency and achieve cost savings, in our May 2017 report, we recommended that 11 of the 24 agencies take action to ensure that the amounts of achieved data center cost savings and avoidances are consistent across all reporting mechanisms. We also recommended that 17 of the 24 agencies each take action to complete missing elements in their strategic plans and submit their plans to OMB in order to optimize their data centers and achieve cost savings. Twelve agencies agreed with our recommendations, 2 did not agree, and 10 agencies and OMB did not state whether they agreed or disagreed. To facilitate transparency across the government in acquiring and managing IT investments, OMB established a public website—the IT Dashboard—to provide detailed information on major investments at 26 agencies, including ratings of their performance against cost and schedule targets. Among other things, agencies are to submit ratings from their CIOs, which, according to OMB’s instructions, should reflect the level of risk facing an investment relative to that investment’s ability to accomplish its goals. In this regard, FITARA includes a requirement for CIOs to categorize their major IT investment risks in accordance with OMB guidance. Over the past 6 years, we have issued a series of reports about the Dashboard that noted both significant steps OMB has taken to enhance the oversight, transparency, and accountability of federal IT investments by creating its Dashboard, as well as concerns about the accuracy and reliability of the data. In total, we have made 47 recommendations to OMB and federal agencies to help improve the accuracy and reliability of the information on the Dashboard and to increase its availability. Most agencies agreed with our recommendations or had no comments. As of May 2017, 17 of these recommendations have been implemented. 
In June 2016, we determined that 13 of the 15 agencies selected for in-depth review had not fully considered risks when rating their major investments on the Dashboard. Specifically, our assessments of risk for 95 investments at the 15 selected agencies matched the CIO ratings posted on the Dashboard 22 times, showed more risk 60 times, and showed less risk 13 times. Figure 4 summarizes how our assessments compared to the selected investments’ CIO ratings. Aside from the inherently judgmental nature of risk ratings, we identified three factors which contributed to differences between our assessments and the CIO ratings: Forty of the 95 CIO ratings were not updated during April 2015 (the month we conducted our review), which led to differences between our assessments and the CIOs’ ratings. This underscores the importance of frequent rating updates, which help to ensure that the information on the Dashboard is timely and accurately reflects recent changes to investment status. Three agencies’ rating processes spanned longer than 1 month. Longer processes mean that CIO ratings are based on older data, and may not reflect the current level of investment risk. Seven agencies’ rating processes did not focus on active risks. According to OMB’s guidance, CIO ratings should reflect the CIO’s assessment of the risk and the investment’s ability to accomplish its goals. CIO ratings that do not incorporate active risks increase the chance that ratings overstate the likelihood of investment success. As a result, we concluded that the associated risk rating processes used by the 15 agencies were generally understating the level of an investment’s risk, raising the likelihood that critical federal investments in IT are not receiving the appropriate levels of oversight. To better ensure that the Dashboard ratings more accurately reflect risk, we recommended that the 15 agencies take actions to improve the quality and frequency of their CIO ratings. 
Twelve agencies generally agreed with or did not comment on the recommendations and three agencies disagreed, stating that their CIO ratings were adequate. However, we noted that weaknesses in these three agencies’ processes still existed and that we continued to believe our recommendations were appropriate. As of May 2017, these recommendations have not yet been fully implemented. OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. In 2010, it called for agencies’ major investments to deliver functionality every 12 months and, since 2012, every 6 months. Subsequently, FITARA codified a requirement that agency CIOs certify that IT investments are adequately implementing OMB’s incremental development guidance. However, in May 2014, we reported that 66 of 89 selected investments at five major agencies did not plan to deliver capabilities in 6-month cycles, and less than half of these investments planned to deliver functionality in 12-month cycles. We also reported that only one of the five agencies had complete incremental development policies. Accordingly, we recommended that OMB clarify its guidance on incremental development and that the selected agencies update their associated policies to comply with OMB’s revised guidance (once made available), and consider the factors identified in our report when doing so. Four of the six agencies agreed with our recommendations or had no comments, one agency partially agreed, and the remaining agency disagreed with the recommendations. The agency that disagreed did not believe that its recommendations should be dependent upon OMB taking action to update guidance. In response, we noted that only one of the recommendations to that agency depended upon OMB action, and we maintained that the action was warranted and could be implemented. 
Subsequently, in August 2016, we reported that agencies had not fully implemented incremental development practices for their software development projects. Specifically, we noted that, as of August 31, 2015, 22 federal agencies had reported on the Dashboard that 300 of 469 active software development projects (approximately 64 percent) were planning to deliver usable functionality every 6 months for fiscal year 2016, as required by OMB guidance. Table 1 lists the total number and percent of federal software development projects for which agencies reported plans to deliver functionality every 6 months for fiscal year 2016. Regarding the remaining 169 projects (or 36 percent) that were reported as not planning to deliver functionality every 6 months, agencies provided a variety of explanations for not achieving that goal. These included project complexity, the lack of an established project release schedule, or that the project was not a software development project. Further, in conducting an in-depth review of seven selected agencies’ software development projects, we determined that 45 percent of the projects delivered functionality every 6 months for fiscal year 2015 and 55 percent planned to do so in fiscal year 2016. However, significant differences existed between the delivery rates that the agencies reported to us and what they reported on the Dashboard. For example, for four agencies (the Departments of Commerce, Education, Health and Human Services, and Treasury), the percentage of delivery reported to us was at least 10 percentage points lower than what was reported on the Dashboard. These differences were due to (1) our identification of fewer software development projects than agencies reported on the Dashboard and (2) the fact that information reported to us was generally more current than the information reported on the Dashboard. 
We concluded that, by not having up-to-date information on the Dashboard about whether the project is a software development project and about the extent to which projects are delivering functionality, these seven agencies were at risk that OMB and key stakeholders may make decisions regarding the agencies’ investments without the most current and accurate information. As such, we recommended that the seven selected agencies review major IT investment project data reported on the Dashboard and update the information as appropriate, ensuring that these data are consistent across all reporting channels.

Finally, while OMB has issued guidance requiring agency CIOs to certify that each major IT investment’s plan for the current year adequately implements incremental development, only three agencies (the Departments of Commerce, Homeland Security, and Transportation) had defined processes and policies intended to ensure that the CIOs certify that major IT investments are adequately implementing incremental development. Accordingly, we recommended that the remaining four agencies—the Departments of Defense, Education, Health and Human Services, and the Treasury—establish policies and processes for certifying that major IT investments adequately use incremental development. The Departments of Education and Health and Human Services agreed with our recommendation, while the Department of Defense disagreed and stated that its existing policies address the use of incremental development. However, we noted that the department’s policies did not comply with OMB’s guidance and that we continued to believe our recommendation was appropriate. The Department of the Treasury did not comment on its recommendation. In total, we have made 23 recommendations to OMB and agencies to improve their implementation of incremental development. As of May 2017, 17 of our recommendations remained open.

Federal agencies engage in thousands of software licensing agreements annually.
The objective of software license management is to manage, control, and protect an organization’s software assets. Effective management of these licenses can help avoid purchasing too many licenses, which can result in unused software, as well as too few licenses, which can result in noncompliance with license terms and cause the imposition of additional fees. As part of its PortfolioStat initiative, OMB has developed policy that addresses software licenses. This policy requires agencies to conduct an annual, agency-wide IT portfolio review to, among other things, reduce commodity IT spending. Such areas of spending could include software licenses.

In May 2014, we reported on federal agencies’ management of software licenses and determined that better management was needed to achieve significant savings government-wide. In particular, 22 of the 24 major agencies did not have comprehensive license policies and only 2 had comprehensive license inventories. In addition, we identified five leading software license management practices, and the agencies’ implementation of these practices varied. As a result of agencies’ mixed management of software licensing, agencies’ oversight of software license spending was limited or lacking, thus potentially leading to missed savings. However, the potential savings could be significant considering that, in fiscal year 2012, 1 major federal agency reported saving approximately $181 million by consolidating its enterprise license agreements, even when its oversight process was ad hoc. Accordingly, we recommended that OMB issue needed guidance to agencies; we also made 135 recommendations to the 24 agencies to improve their policies and practices for managing licenses. Among other things, we recommended that the agencies regularly track and maintain a comprehensive inventory of software licenses and analyze the inventory to identify opportunities to reduce costs and better inform investment decision making.
Most agencies generally agreed with the recommendations or had no comments. As of May 2017, 123 of the recommendations had not been implemented, but 4 agencies had made progress. For example, three agencies—the Department of Education, General Services Administration, and U.S. Agency for International Development—regularly track and maintain a comprehensive inventory of software licenses. In addition, two of these agencies also analyze agency-wide software licensing data to identify opportunities to reduce costs and better inform investment decision making. The National Aeronautics and Space Administration uses its inventory to make decisions and reduce costs, but does not regularly track and maintain a comprehensive inventory. While the other agencies had not completed the actions associated with these recommendations, they had plans in place to do so. Table 2 reflects the extent to which agencies implemented recommendations in these areas.

In conclusion, with the enactment of FITARA, the federal government has an opportunity to improve the transparency and management of IT acquisitions and operations, and to strengthen the authority of CIOs to provide needed direction and oversight. The forum we held also recommended that CIOs be given more authority, and noted the important role played by the Federal CIO. Most agencies have taken steps to improve the management of IT acquisitions and operations by implementing key FITARA initiatives, including data center consolidation, efforts to increase transparency via OMB’s IT Dashboard, incremental development, and management of software licenses; and they have continued to address recommendations we have made over the past several years. However, additional improvements are needed, and further efforts by OMB and federal agencies to implement our previous recommendations would better position them to fully implement FITARA.
To help ensure that these efforts succeed, OMB’s and agencies’ continued implementation of FITARA is essential. In addition, we will continue to monitor agencies’ implementation of our previous recommendations. Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at pownerd@gao.gov. Individuals who made key contributions to this testimony are Kevin Walsh (Assistant Director), Chris Businsky, Rebecca Eyler, and Jessica Waselkow (Analyst in Charge).

As of May 2017, 9 of the 25 federal CIO positions were filled by acting CIOs who do not permanently hold the position. Of the 9, 2 were career positions and the remaining 7 require some form of appointment. Table 3 summarizes the status of the CIO position at the federal level.

The federal government plans to invest almost $96 billion in IT in fiscal year 2018. Historically, these investments have too often failed, incurred cost overruns and schedule slippages, or contributed little to mission-related outcomes. Accordingly, in December 2014, Congress enacted FITARA, aimed at improving agencies' acquisitions of IT. Further, in February 2015, GAO added improving the management of IT acquisitions and operations to its high-risk list. This statement summarizes agencies' progress in improving the management of IT acquisitions and operations. This statement is based on GAO's prior and recently published reports on (1) data center consolidation, (2) risk levels of major investments as reported on OMB's IT Dashboard, (3) implementation of incremental development practices, and (4) management of software licenses.
The Office of Management and Budget (OMB) and federal agencies have taken steps to improve information technology (IT) through a series of initiatives, and as of May 2017, had fully implemented about 47 percent of the approximately 800 related GAO recommendations. However, additional actions are needed.

Consolidating data centers. OMB launched an initiative in 2010 to reduce data centers, which was reinforced by the Federal Information Technology Acquisition Reform Act (FITARA) in 2014. GAO reported in May 2017 that agencies had closed 4,388 of the 9,995 total data centers, and had plans to close a total of 5,597 through fiscal year 2019. As a result, agencies reportedly saved or avoided about $2.3 billion through August 2016. However, of the 23 agencies that submitted required strategic plans, only 7 had addressed all required elements. GAO recommended that agencies complete their plans to optimize their data centers and achieve cost savings and ensure reported cost savings are consistent across reporting mechanisms. Most agencies agreed with the recommendations.

Enhancing transparency. OMB's IT Dashboard provides information on major investments at federal agencies, including ratings from Chief Information Officers that should reflect the level of risk facing an investment. GAO reported in June 2016 that agencies had not fully considered risks when rating their investments on the Dashboard. In particular, of the 95 investments reviewed, GAO's assessments of risks matched the ratings 22 times, showed more risk 60 times, and showed less risk 13 times. GAO recommended that agencies improve the quality and frequency of their ratings. Most agencies generally agreed with or did not comment on the recommendations.

Implementing incremental development. OMB has emphasized the need for agencies to deliver investments in smaller parts, or increments, in order to reduce risk and deliver capabilities more quickly.
Since 2012, OMB has required investments to deliver functionality every 6 months. In August 2016, GAO reported that while 22 agencies had reported that about 64 percent of 469 active software development projects planned to deliver usable functionality every 6 months for fiscal year 2016, the other 36 percent of the projects did not. Further, for 7 selected agencies, GAO identified differences in the percentages of software projects reported to GAO as delivering functionality every 6 months, compared to what was reported on the Dashboard. GAO made recommendations to agencies and OMB to improve the reporting of incremental data on the Dashboard. Most agencies agreed or did not comment on the recommendations.

Managing software licenses. Effective management of software licenses can help avoid purchasing too many licenses that result in unused software. In May 2014, GAO reported that better management of licenses was needed to achieve savings. Specifically, only two agencies had comprehensive license inventories. GAO recommended that agencies regularly track and maintain a comprehensive inventory and analyze that data to identify opportunities to reduce costs and better inform decision making. Most agencies generally agreed with the recommendations or had no comments; as of May 2017, 4 agencies had made progress in implementing them.

From fiscal years 2010 through 2015, GAO made about 800 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations, including recommendations to improve the oversight and execution of the data center consolidation initiative, the accuracy and reliability of the Dashboard, incremental development policies, and software license management. Most agencies agreed with GAO's recommendations or had no comments. In addition, in fiscal year 2016, GAO made about 200 new recommendations in this area. GAO will continue to monitor agencies' implementation of these recommendations.
Mr. Chairman and Members of the Subcommittee: I am pleased to be here today to discuss our observations on the General Services Administration’s (GSA) strategic plan. This plan was prepared for submission to the Office of Management and Budget (OMB) and Congress on September 30, 1997, as required by the Government Performance and Results Act of 1993 (the Results Act). Building on our July 1997 report on GSA’s April draft plan, I will discuss the improvements GSA has made and areas where GSA’s strategic plan can be improved as it evolves over time.

GSA’s April 28 draft strategic plan contained all six components required by the Results Act. However, the draft plan generally lacked clarity, context, descriptive information, and linkages among the components. GSA has since made a number of improvements, and the six components better achieve the purposes of the Act. However, additional improvements would strengthen the September 30 plan as it evolves over time. The September 30 plan continues to have general goals and objectives that seem to be expressed in terms that may be challenging to translate into quantitative analysis. The strategies component is an improvement over the prior version but would benefit from a more detailed discussion of how each goal will actually be accomplished. Although the key external factors component in the September 30 plan is clearer and provides more context, the factors are not clearly linked to the general goals and objectives. The program evaluations component provides a listing of the various program evaluations that GSA used, but it does not include the required schedule of future evaluations. Although the plan does a much better job of setting forth GSA’s statutory authorities, this addition could be further improved by linking the different authorities to either the general goals and objectives or the performance goals.
The plan also refers to three related areas—crosscutting issues, major management problems, and data reliability—but the discussion is limited and not as useful as it could be in trying to assess the impact of these factors on meeting and measuring the goals. This is especially true for major management and data reliability problems, which can have a negative impact on measuring progress and achieving the goals.

In the 1990s, Congress put in place a statutory framework to address long-standing weaknesses in federal government operations, improve federal management practices, and provide greater accountability for achieving results. This framework included as its essential elements financial management reform legislation, information technology reform legislation, and the Results Act. In enacting this framework, Congress sought to create a more focused, results-oriented management and decisionmaking process within both Congress and the executive branch. These laws seek to improve federal management by responding to a need for accurate, reliable information for congressional and executive branch decisionmaking. This information has been badly lacking in the past, as much of our work has demonstrated. Implemented together, these laws provided a powerful framework for developing fully integrated information about agencies’ missions and strategic priorities, data to show whether or not the goals are achieved, the relationship of information technology investment to the achievement of those goals, and accurate and audited financial information about the costs of achieving mission results. The Results Act focuses on clarifying missions, setting goals, and measuring performance toward achieving those goals. It emphasizes managing for results and pinpointing opportunities for improved performance and increased accountability.
Congress intended for the Act to improve the effectiveness of federal programs by fundamentally shifting the focus of management and decisionmaking away from a preoccupation with tasks and services to a broader focus on the results of federal programs. The Results Act requires that an agency’s strategic plan contain six components: (1) a comprehensive mission statement; (2) agencywide general goals and objectives; (3) a description of the approaches (or strategies) to achieve the goals and objectives and the various resources needed; (4) a description of the relationship between the long-term goals/objectives and the annual performance plans required by the Act; (5) an identification of key factors, external to the agency and beyond its control, that could significantly affect achievement of the strategic goals; and (6) a description of how program evaluations were used to establish and revise strategic goals and a schedule for future program evaluations.

We reported in July that the April 28 draft plan included the six components required by the Results Act and that the general goals and objectives in the plan reflected GSA’s major statutory responsibilities. However, our analysis showed that the plan could have better met the purposes of the Act and related OMB guidance. Two of the required components—how goals and objectives were to be achieved and program evaluations—needed more descriptive information on how goals and objectives were to be achieved, how program evaluations were used in setting goals, and what the schedule would be for future evaluations to better achieve the purposes of the Act. The four other required components—mission statement, general goals and objectives, key external factors, and relating performance goals to general goals and objectives—were more responsive to the Act but needed greater clarity and context. We also noted that the general goals and objectives and the mission statement in the draft plan did not emphasize economy and efficiency, as a reflection of taxpayers’ interests.
Also, the general goals and objectives seem to have been expressed in terms that may be challenging to translate into quantitative or measurable analysis, and there could have been better linkages between the various components of the plan. We also reported that the plan could have been made more useful to GSA, Congress, and other stakeholders by providing a fuller description of statutory authorities and an explicit discussion of crosscutting functions, major management problems, and the adequacy of data and systems. Although the plan reflected the major pieces of legislation that establish GSA’s mission and explained how GSA’s mission is linked to key statutes, we reported that GSA could provide other useful information, such as listing laws that broaden its responsibilities as a central management agency and which are reflected in the goals and objectives. The draft plan also did not explicitly discuss crosscutting functions and how they might affect successful accomplishment of goals and objectives. It also made no mention of whether GSA coordinated the plan with its stakeholders. The plan was also silent on the formidable management problems we have identified over the years—issues that are important because they could have a serious impact on whether GSA can achieve its strategic goals. Finally, the plan made no mention of how data limitations would affect its ability to measure performance and ultimately manage its programs. We reported that consideration of these areas would give GSA a better framework for developing and achieving its goals and help stakeholders better understand GSA’s operating constraints and environment.

The September 30 plan reflects a number of the improvements that we suggested in our July 1997 report. The clarity of the September 30 plan is improved and it provides more context, descriptive information, and linkages within and among the six components that are required by the Act.
Compared to the April 28 draft, the September 30 plan generally should provide stakeholders with a better understanding of GSA’s overall mission and strategic outlook. Our analysis of the final plan also showed that, in line with our suggestion, GSA placed more emphasis on economy and efficiency in the comprehensive mission statement and general goals and objectives components. The September 30 plan also generally described the operational processes, staff skills, and technology required, as well as the human, information, and other resources needed, to meet the goals and objectives. The strategic plan now contains a listing of program evaluations that GSA used to prepare the plan and a more comprehensive discussion of the major pieces of legislation that serve as a basis for its mission, reflecting additional suggestions we made in our July 1997 report. Furthermore, the September 30 plan’s overall improvement in clarity and context should help decisionmakers and other stakeholders better understand the crosscutting, governmentwide nature of GSA’s operations as a central management agency. The September 30 plan makes some reference to major management problems in the program evaluations component and also addresses the importance of data reliability in the general goals and objectives component. The improvements that GSA has made are a step in the right direction, and the six components better achieve the purposes of the Act. However, we believe that additional improvements, which are described in the following section, would strengthen the strategic plan as it evolves over time. As we discussed in our July 7, 1997, report on the draft plan, the September 30 plan continues to have general goals and objectives that seem to be expressed in terms that may be challenging to translate into quantitative or measurable analysis. This could make it difficult to determine whether they are actually being achieved. 
For example, the goal to “compete effectively for the federal market” has such objectives as “provide quality products and services at competitive prices and achieve significant savings” and “open GSA to marketplace competition where appropriate to reduce costs to the government and improve customer service.” However, this goal, its related objectives, and the related narrative do not state specifically how progress will be measured, such as the amount of savings GSA intends to achieve or the timetable for opening the GSA marketplace for competition. OMB Circular A-11 specifies that general goals and objectives should be stated in a manner that allows a future assessment to be made of whether the goals are being met. The OMB guidance states that general goals that are quantitative facilitate this determination, but it also recognizes that the goals need not be quantitative and that related performance goals can be used as a basis for future assessments. However, we observed that many of the performance goals that GSA included in the plan also were not expressed in terms that could easily enable quantitative analysis, which could make gauging progress difficult in future assessments. The strategies component—how the goals and objectives will be achieved—described the operational processes, human resources and skills, and information and technology needed to meet the general goals and objectives. This component is an improvement over the prior version we reviewed, and applicable performance goals are listed with each of these factors. Although GSA chose to discuss generally the factors that will affect its ability to achieve its performance goals, we believe that a more detailed discussion of how each goal will actually be accomplished would be more useful to decisionmakers. 
To illustrate with a specific example, the plan could discuss the approaches that GSA will use to meet the performance goals related to its general goal of promoting responsible asset management using operational processes, human resources and skills, information and technology, and capital/other resources. Such a discussion would give decisionmakers a better basis for assessing whether GSA is achieving its goals and objectives. We also noted that the strategies component does not discuss priorities among the goals and objectives. Such a discussion would be helpful to decisionmakers in determining where to focus priorities in the event of a sudden change in funding or staffing. Finally, GSA deferred to the President’s budget its discussion about capital and other resources. We believe it reasonable to include in this component at least some general discussion of how capital and other resources will be used to meet each general goal.

Although the external factors component in the September 30 plan is much clearer and provides more context than the draft version we reviewed, the factors are not clearly linked to the general goals and objectives. OMB Circular A-11 states that the plan should include this link, as well as describe how achieving the goals could be affected by the factors. This improvement would allow decisionmakers to better understand how the factors potentially will affect achievement of each general goal and objective.

The program evaluations component in the September 30 plan provides a listing of the various program evaluations that GSA indicates were used in developing the plan. However, it still does not include a schedule of future evaluations. Instead, the plan states that the schedule for future program evaluations is under development and that GSA intends to use the remainder of the consultation process to obtain input from Congress and stakeholders concerning the issues that should be studied on a priority basis.
However, OMB Circular A-11 indicates that the schedule should have been completed and included in the September 30 plan, together with an outline of the general methodology to be used and a discussion of the particular issues to be addressed. Although the plan does a much better job of setting forth GSA’s statutory authorities in the attachment, this description could be further improved if the different statutory authorities discussed therein were linked with either the general goals and objectives or the performance goals included in the plan. Further, the plan only makes limited reference to the other important areas we identified in our July 1997 report—crosscutting issues, major management problems, and data reliability. The plan’s improved clarity and context should help decisionmakers understand the crosscutting issues that affect GSA as a central management agency. However, explicit discussion of these issues is limited, and the September 30 plan makes no reference to the extent to which GSA coordinated with stakeholders. The September 30 plan references major management problems in the program evaluations component, but it does not explicitly discuss these problems or identify which problems could have an adverse impact on meeting the general goals and objectives. Our work has shown over the years that these types of problems have significantly hampered GSA’s and its stakeholder agencies’ abilities to accomplish their missions. For example, the plan could address how GSA will attempt to ensure that its information systems meet computer security requirements or how GSA plans to address the year 2000 problem in its computer hardware and software systems. The plan does reference data reliability in the general goals and objectives component. 
However, the discussion of data reliability, which is so critical for measuring progress and results, is limited and not as useful as it could be in attempting to assess the impact that data problems could have on meeting the general goals and objectives. We continue to believe that greater emphasis on how GSA plans to resolve management problems and on the importance of data reliability could improve the plan.

Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions.

GAO discussed its observations on the General Services Administration's (GSA) September 30, 1997, strategic plan.
GAO noted that: (1) GSA's April 1997 draft strategic plan contained all six components required by the Government Performance and Results Act; (2) however, the draft plan generally lacked clarity, context, descriptive information, and linkages among the components; (3) GSA has since made a number of improvements, and the six components now better achieve the purposes of the act; (4) however, additional improvements would strengthen the September 30 plan as it evolves over time; (5) the September 30 plan continues to have general goals and objectives that seem to be expressed in terms that may be challenging to translate into quantitative analysis; (6) the strategies component is an improvement over the prior version but would benefit from a more detailed discussion of how each goal will actually be accomplished; (7) although the external factors in the September 30 plan are clearer and provide more context, the factors are not clearly linked to the general goals and objectives; (8) the program evaluations component provides a listing of the various program evaluations that GSA used, but it does not include a required schedule of future evaluations; (9) although the plan does a much better job of setting forth GSA's statutory authorities, this addition could be further improved by linking the different authorities to either the general goals and objectives or the performance goals; (10) the plan also refers to three related areas--crosscutting issues, major management problems, and data reliability--but the discussion is limited and not as useful as it could be in articulating how these issues might affect successful accomplishment of goals and objectives; and (11) this is especially true for major management and data reliability problems, which can have a negative impact on measuring progress and achieving the goals.
Medicare’s home health benefit enables certain beneficiaries with post- acute-care needs (such as recovery from joint replacement) and chronic conditions (such as congestive heart failure) to receive care in their homes. To qualify for home health care, beneficiaries must be confined to their residence (“homebound”); require part-time or intermittent skilled nursing, physical therapy, or speech therapy; be under the care of a physician; and have the services furnished under a plan of care prescribed and periodically reviewed by a physician. If these conditions are met, Medicare will pay for the following types of visits: skilled nursing; physical, occupational, and speech therapy; medical social service; and home health aide. As long as beneficiaries continue to remain eligible for home health services, they may receive an unlimited number of visits. Beneficiaries are not liable for any out-of-pocket costs for this benefit. Medicare home health payments grew at an average annual rate of 25 percent between 1990 and 1997, more than three times the rate of spending growth for the entire Medicare program. The growth in spending was attributable primarily to increases in the number of visits provided and not in the payment per visit. The number of Medicare beneficiaries receiving home health almost doubled during that period, from 57 to 109 beneficiaries per 1,000. At the same time, the average number of visits provided per home health user grew from 36 to 73 visits. The rapid growth in home health use was due, in part, to the cost-based payment method. Under the cost-based system, HHAs were paid their costs up to a per-visit limit for each visit provided. This method, at a time when there was little program oversight, offered few incentives to provide visits efficiently or only when needed. 
By 1997, home health utilization—as measured by the number of home health users per 1,000 Medicare beneficiaries and the number of visits provided—varied widely across geographic regions. For example, 48 Medicare beneficiaries per 1,000 in Hawaii received home health care in 1997. In the same year, more than 157 beneficiaries per 1,000 received home health care in Louisiana. Meanwhile, Medicare home health users in Washington received an average of 32 visits, compared to an average of 161 visits per user in Louisiana. This wide variation in use persisted even after controlling for patient diagnosis. This variability is partly due to the lack of standards for necessary or appropriate care. Furthermore, even the most basic unit of service—the visit—was not well defined in terms of either the amount of time spent with a patient or the type of services provided.

To constrain Medicare home health spending growth, BBA required HCFA to replace Medicare’s cost-based, per-visit payment method with a PPS by fiscal year 2000. Until PPS could be implemented, BBA imposed spending controls under the IPS: for 3 years beginning October 1, 1997, the IPS incorporated tighter per-visit cost limits than had previously been in place and subjected each HHA to an annual Medicare revenue cap, which was the product of an HHA-specific, per-beneficiary amount and the number of beneficiaries that the HHA served.

Under the PPS, an HHA receives a single payment for all items and services furnished during each 60-day episode of care. The payment rate is based on the national average cost of providing care in 1997, not an HHA’s actual costs. Because the payment is divorced from an HHA’s cost of delivering care, an HHA that delivers care for less than the payment amount can profit; conversely, an HHA will lose financially if its service costs are higher than the payment.
To account for differences in beneficiary care needs, PPS episode payments are adjusted from a base rate (which was $2,115 in fiscal year 2001). These adjustments are based on a classification system that groups home health beneficiaries into 80 payment groups. The payment for a beneficiary in the most intensive payment group is approximately five times greater than the payment for a beneficiary in the least intensive group. In fiscal year 2001, episode payments ranged from $1,114 to $5,947. We have reported on the strong financial incentives under the home health PPS to reduce the costs of providing an episode of care. HHAs can do this by reducing unnecessary or excessive visits, delivering care more efficiently, or underserving beneficiaries. We expressed concern that it may be hard to detect when the latter occurs. The lack of standards for necessary or appropriate care makes it difficult to review care and take steps to ensure that needed services are being delivered. We also said that the PPS could lead to substantial overpayments to some HHAs relative to the level of services being provided. Further, we noted industry concerns about the ability of some HHAs to respond to PPS incentives to reduce their costs and about inadequacies in the method used to adjust payments to account for differences in beneficiary care needs. As a result of these concerns, we recommended that risk sharing be incorporated into the PPS design. Risk sharing would limit the total losses and gains an HHA could experience over a period of time for treating beneficiaries by establishing formulas to share losses or gains with the Medicare program. This would involve a settlement process in which an HHA’s actual costs of delivering care over the relevant period would be compared to its actual payments. 
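The risk-sharing settlement described above could take several forms. The sketch below illustrates one common variant, a symmetric risk corridor; the 5 percent threshold and the 50/50 sharing ratio are illustrative assumptions, not parameters from this report.

```python
# Hedged sketch of a risk-sharing settlement: an HHA keeps gains or losses
# inside a corridor around its payments; beyond the corridor, gains or
# losses are shared with Medicare. Corridor width and sharing ratio are
# hypothetical.

def settle(payments: float, costs: float,
           corridor: float = 0.05, share: float = 0.50) -> float:
    """Return the settlement adjustment: negative means the HHA returns
    money to Medicare; positive means Medicare pays the HHA more."""
    gain = payments - costs
    threshold = corridor * payments
    if gain > threshold:        # gain beyond corridor: Medicare recoups a share
        return -share * (gain - threshold)
    if gain < -threshold:       # loss beyond corridor: Medicare absorbs a share
        return -share * (gain + threshold)
    return 0.0                  # within the corridor: no adjustment

# An HHA paid $1,000,000 with costs of $800,000 has a $200,000 gain; the
# corridor is $50,000, so Medicare recoups half of the $150,000 excess.
print(settle(1_000_000, 800_000))    # -75000.0
print(settle(1_000_000, 1_200_000))  # 75000.0
```

Under such a formula an HHA still profits when its costs fall below payments and still loses when they exceed payments, so the efficiency incentive remains, but extreme gains and losses are moderated.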
Such an approach would simultaneously protect beneficiaries against underservice, the Medicare program from overpaying for services, and HHAs serving beneficiaries with greater than average needs when the costs are not accounted for in the payment adjustments. HCFA did not agree with our recommendation, stating that the PPS design and payment adjustments would address our concerns and that risk sharing would be difficult to implement. We subsequently suggested that the Congress consider requiring HCFA to implement risk sharing with the PPS. The average episode payment HHAs received to provide an episode of care in the first 6 months of 2001 was about 35 percent higher than the average estimated cost of providing that care. The average episode payment, accounting for the mix of beneficiaries treated in the first 6 months of 2001, was $2,691. (See table 1.) During this period, we estimated that the cost of providing an episode of care was $1,997 after adjusting for the mix of services provided by agencies and changes in the average time spent for each type of visit since the introduction of the PPS. This large difference between the average episode payment and estimated cost is due to three factors. First, the PPS episode payment amount was calculated on the assumption that about 32 visits would be provided during an average episode, although immediately prior to PPS implementation only about 29 visits per episode were provided. Second, HHAs have further lowered their costs since PPS by providing, on average, only about 22 visits per episode during the first half of 2001. Third, HHA payments have increased because a larger proportion of home health users have been categorized into higher payment groups. 
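The 35 percent figure cited above follows directly from the average payment and estimated cost; a quick arithmetic check:

```python
# Verifying the reported gap between the average episode payment and the
# average estimated episode cost for the first 6 months of 2001.
avg_payment = 2691.0         # average episode payment, per the report
avg_estimated_cost = 1997.0  # average estimated episode cost, per the report
excess = (avg_payment - avg_estimated_cost) / avg_estimated_cost
print(f"{excess:.0%}")  # 35%
```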
While the PPS adjusts payment rates to account for expected variation in costs due to patient care needs, the relationship between average payments and average estimated costs masks wider differences between payments and estimated costs across the 80 home health payment groups. The relationship between payments and estimated costs for the 10 payment groups that account for almost half of home health episodes ranged from 72 percent above the estimated cost to 4 percent below in the first 6 months of 2001. (See table 2.) For the five payment groups with the lowest payments relative to estimated costs, which accounted for 8 percent of all episodes, the payment ranged from about 9 percent below to about equal to the average estimated cost of services provided. The payment was greater than the average estimated cost for the remaining groups. For any HHA, the relationship between Medicare payments and the costs of providing care will likely vary from the averages we report here. The PPS was designed to provide adequate payments to HHAs that operate efficiently and to provide incentives for HHAs to become more efficient. But certain HHAs may have costs higher than payments if they face extraordinary costs not accounted for by the PPS payment groups. The Medicare program is paying HHAs on average considerably more than the estimated cost of care beneficiaries are receiving. Consequently, implementation of the BBA-mandated 15 percent payment reduction, which would lower fiscal year 2003 PPS payments by 7 percent, should not affect HHAs’ ability to serve Medicare beneficiaries. This payment reduction would move the Medicare program closer to becoming a prudent purchaser of home health care, but the reduction by itself is not sufficient. 
A single payment to cover all services provided during a 60-day episode of care, combined with the lack of standards for what constitutes necessary or appropriate home health care, leaves beneficiaries vulnerable to underservice, Medicare vulnerable to future overpayments, and HHAs with a disproportionate number of beneficiaries with extensive needs vulnerable to underpayments. Implementing the 15 percent reduction would not lessen these vulnerabilities. This is why we have previously recommended that the PPS include risk sharing to simultaneously protect beneficiaries, the Medicare program, and HHAs. The Congress should consider making no change in the requirement for a reduction in Medicare home health payments. We continue to urge the Congress to require the Centers for Medicare & Medicaid Services (CMS) to incorporate risk sharing into the PPS design. In written comments on a draft of this report, CMS stated that our findings are consistent with its preliminary analysis of data for the first year of the PPS. It noted that cost report data, which are not yet available for the first year of the PPS, would be required to determine the costs of home health services under the PPS with certainty. CMS reiterated its concerns about implementing risk sharing as a part of the PPS. It believes that risk sharing would undermine the main benefit of the PPS, which is payments that are timely and predictable. Further, CMS stated its belief that the outlier payment policy under the home health PPS and planned monitoring activities should mitigate our concern that some HHAs may be vulnerable to underpayments. Finally, CMS stated that risk sharing is administratively difficult. Although cost report data would more accurately reflect an HHA’s costs, our episode cost estimates build on historic visit costs, adjusted for inflation and changes in visit time, and reflect actual service use, a major determinant of episode costs. 
We believe that the new evidence we present on the wide disparity between payments and estimated costs on average and across payment groups demonstrates the need for and the value of risk sharing in conjunction with the home health PPS. Risk sharing would not remove the incentives under the PPS for HHAs to provide care efficiently, because they would continue to benefit financially when their costs are below their payments and lose financially when their costs are above their payments. Yet, risk sharing would mitigate extreme gains and losses under the PPS. While the monitoring activities and refinements that CMS discusses, such as revisions to the payment groups, could mitigate extreme gains or losses, it could be some time until they are implemented. Furthermore, outlier payments, which account for less than 3 percent of payments, are not by themselves sufficient to protect vulnerable HHAs that have higher than average costs across a number of patients, nor do they protect the Medicare program from excessive spending. We believe that CMS could overcome any administrative difficulties in implementing risk sharing. CMS incorporated a risk-sharing arrangement in its demonstration project on the home health PPS while ensuring predictable and timely payments. We note that CMS has considerable experience in adjusting prospective payments to providers based on expectations for a provider’s costs in the coming year, most recently in implementing the hospital outpatient PPS, which has a provision to protect hospitals from losses. CMS’ comments are included as appendix II. We received oral comments on a draft of this report from representatives of three home health care associations—American Association for Homecare (AAHomecare), National Association for Home Care (NAHC), and Visiting Nurse Associations of America (VNAA). These organizations disagreed with our conclusions. 
All three associations expressed concern about the effect of a potential payment reduction on the industry’s stability and, in particular, its ability to care for medically complex patients. The associations said it was too early in the experience of the PPS to accurately measure home health care use, visit costs, episode costs, or industry profit margins. VNAA stated that our cost estimates do not reflect current fixed costs under the PPS. NAHC raised questions about the timeliness of payments if risk sharing is a part of the home health PPS. The associations said that more information was needed on how low-utilization episodes, partial episodes, and outlier payments would affect the relationship between average episode costs and payments to HHAs. Our results are consistent with CMS’ analysis of a full year of experience under the PPS. Our analysis of 1.48 million episodes did not consider the payment reduction for partial episodes, payment enhancement for outliers, or variable payment adjustments for a significant change in a beneficiary’s condition. When calculating episode payments and estimated costs, we treated these as full episodes. The impact of these payment adjustments on average episode payments is likely to be minimal because they are partially offsetting and apply to less than 8 percent of episodes. Whether the visit costs for these types of episodes are different from the average visit costs is not known. We excluded low-utilization episodes from our analysis because they are not paid an episode rate. HHA visits per user have been dropping since 1997, allowing ample time for HHAs to bring their fixed costs in line with current use patterns. The magnitude of the difference between payments and estimated costs provides compelling evidence that the legislated reduction would not destabilize the home health industry. 
Further, risk sharing if implemented would moderate any negative effects on the HHAs that may incur costs that are higher than these estimates including when HHAs treat medically complex patients. We are sending copies of this report to the Administrator of CMS. We will also make copies available to others upon request. If you or your staff have any questions, please call me at (202) 512-7114. Other contacts and staff who contributed to this report are listed in appendix III. We conducted our analyses using Medicare provider, claims, and beneficiary files for calendar years 2000 and 2001. We included only those providers that were listed as active in each year. For episodes ending on January 1, 2001 through June 30, 2001, we used all final bills from the home health Standard Analytic File (SAF) for 2001 that were available as of January 24, 2002. Our file of 1.48 million episodes, which excludes all low-utilization episodes, does not include any claims for the first 6 months of 2001 submitted after January 24, 2002. For 2000, we used all final bills ending on January 1, 2000 through June 30, 2000. To compute our estimate of average episode costs, we used HCFA’s per-visit cost estimates that were used to establish the PPS episode rates and that were calculated from the sample of fiscal year 1997 audited cost reports. The per-visit costs, which include all costs of home health services covered and paid for on a reasonable cost basis, were inflated to 2001 cost levels using the market basket index for home health services. Then we adjusted the per-visit costs to account for the change in the time spent for each type of visit in 2001 compared to 2000. We estimated episode costs by multiplying the adjusted per-visit cost for each type of visit by the average mix of visits provided in each payment group in a 2001 episode. We also added an additional amount for the costs of other services not included in the per-visit costs. 
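The episode cost estimate described above can be sketched as follows. All input values in the example are hypothetical placeholders, since the actual 1997 per-visit costs, market basket inflation factors, and 2001 visit mixes are not reproduced in this report.

```python
# Minimal sketch of the episode cost methodology: inflate 1997 per-visit
# costs, adjust for changes in visit length, weight by the average mix of
# visits in an episode, and add the cost of services not captured in the
# per-visit figures. All numeric inputs below are hypothetical.

def estimate_episode_cost(per_visit_cost_1997: dict[str, float],
                          inflation_factor: float,
                          visit_time_ratio: dict[str, float],
                          avg_visit_mix: dict[str, float],
                          other_services_cost: float) -> float:
    total = other_services_cost
    for visit_type, cost in per_visit_cost_1997.items():
        # Inflate to 2001 levels, then adjust for the change in visit time.
        adjusted = cost * inflation_factor * visit_time_ratio[visit_type]
        # Weight by the average number of visits of this type per episode.
        total += adjusted * avg_visit_mix[visit_type]
    return round(total, 2)

# Hypothetical example with only two visit types.
cost = estimate_episode_cost(
    per_visit_cost_1997={"skilled_nursing": 90.0, "home_health_aide": 40.0},
    inflation_factor=1.12,                      # 1997 -> 2001, illustrative
    visit_time_ratio={"skilled_nursing": 0.95,  # 2001 visit time vs. 2000
                      "home_health_aide": 1.00},
    avg_visit_mix={"skilled_nursing": 12.0,     # visits per episode
                   "home_health_aide": 8.0},
    other_services_cost=60.0,
)
print(cost)
```

The corresponding average payment is simply the 80 group payment amounts weighted by each group's share of episodes, as described below.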
Our methodology assumes that the relationship between direct patient care costs and overhead costs has remained the same over time and therefore that administrative costs have not increased or decreased since the PPS. We calculated the average payment as the payment amount for each of the 80 payment groups weighted by the proportion of all episodes in 2001 provided within each payment group. We interviewed CMS officials and industry representatives from the American Association for Homecare, National Association for Home Care, Gentiva Health Services, Rocky Mountain Health Care, and the Visiting Nurse Associations of America regarding the changes in provider practices since the implementation of PPS. In addition to those named above, Leslie V. Gordon, Dan Lee, Carolyn Manuel-Barkin, Lynn Nonnemaker, and Paul M. Thomas made key contributions to this report. | The Balanced Budget Act of 1997 significantly changed Medicare's home health care payments to home health agencies (HHAs). Under a prospective payment system (PPS), HHAs are paid a fixed amount, adjusted for beneficiary care needs, for providing up to 60 days of care---termed a "home health episode." The act also imposed new interim payment limits to moderate spending until the PPS could be implemented. Although PPS was designed to lower Medicare spending below what it was under the interim system, GAO found that Medicare's payments for full home health care episodes were 35 percent higher than estimated in the first six months of 2001. These disparities indicate that Medicare's PPS overpays for services actually provided, although some HHAs facing extraordinary costs not accounted for by the payment system may be financially disadvantaged. |
FPS is responsible for protecting federal employees and visitors in approximately 9,600 federal facilities under the custody and control of GSA. The level of security FPS provides at each of the facilities (including whether guards are deployed) varies depending on the building’s facility security level. To fund its operations, FPS charges fees for its security services to federal tenant agencies in GSA-controlled facilities. For fiscal year 2013, FPS expects to receive $1.3 billion in fees. FPS has about 1,200 full-time employees and about 13,500 contract security guards deployed at approximately 5,650 (generally level III and IV facilities) of GSA’s 9,600 facilities. Figure 1 shows the location of FPS’s 11 regions and the approximate number of guards serving under contracts in each of these regions. FPS’s contract guard program is the most visible component of the agency’s operations, and the agency relies on its guards to be its “eyes and ears” while performing their duties. Contract guards are responsible for controlling access to facilities; conducting screening at access points to prevent the introduction of prohibited items, such as weapons and explosives; enforcing property rules and regulations; detecting and reporting criminal acts; and responding to emergency situations involving facility safety and security. In general, guards may only detain, not arrest, individuals, and guards’ authorities typically do not extend beyond the facility. However, some guards may have arrest authority under conditions set forth by the individual states. According to FPS’s contract for guard service, its private-sector contract guard companies have primary responsibility for training and ensuring that guards have met certification and qualification requirements; however, FPS is ultimately responsible for oversight of the guards. 
FPS relies on its Contracting Officer Representatives (COR) and inspectors located in its 11 regions to inspect guard posts and verify that training, certifications, and time cards are accurate, among other responsibilities. CORs are individuals appointed by the contracting officer to assist in the monitoring or administration of a contract, including monitoring contractor performance, receiving reports and other documentation, performing inspections, and maintaining contact with both the contract guard company and the contracting officer. Contract guard companies are responsible for providing and maintaining all guard services as described in the contract statement of work, including management, supervision, training, equipment, supplies, and licensing. Before guards are assigned to a post or an area of responsibility at a federal facility, FPS requires that they all have contractor employee fitness determinations (the employee’s fitness to work on behalf of the government based on character and conduct) and complete approximately 120 hours of training provided by the contractor and FPS, including basic training, firearms training, and screener (X-ray and magnetometer) training. Guards must also pass an FPS-administered written examination and possess the necessary certificates, licenses, and permits as required by the contract. Additionally, FPS requires its guards to complete 40 hours of refresher training every 3 years. Some states and localities require that guards obtain additional training and certifications. See table 1 for a detailed list of FPS’s guard training, certification, and qualification requirements. We found similarities in the ways that FPS and six federal agencies we reviewed ensure that contract guards have received required training, certifications, and qualifications. 
Similar to FPS, each of the six agencies we examined—DOE, NASA, PFPA, State, the Kennedy Center, and the Holocaust Museum—depend largely on the contract guard companies to ensure guards are trained, certified, and qualified. They also depend on the guard companies to document compliance with contract requirements. All six agencies and FPS require basic, firearms, and screener (x-ray and magnetometer) training for their armed guards. In addition, FPS and five of the six agencies we reviewed require refresher training. FPS continues to experience difficulty providing required screener (x-ray and magnetometer equipment) training to all guards. In 2009 and 2010, we reported that FPS had not provided screener training to 1,500 contract guards in one FPS region. In response to our reports, FPS stated that it planned to implement a program to train its inspectors to provide screener training to all of its contract guards. Under this program, FPS planned to first provide x-ray and magnetometer training to its inspectors who would subsequently be responsible for training the guards. However, FPS continues to have guards deployed to federal facilities without this training. As noted in table 1, FPS requires all guards to receive 8 hours of initial screener training provided by FPS. Screener training is important because guards control access points at federal facilities and thus must be able to properly operate x-ray and magnetometer machines and understand their results. However, 3 years after our 2010 report, guards are deployed to federal facilities who have never received this training. For example, an official at one contract guard company stated that 133 of its approximately 350 guards (about 38 percent) on three separate FPS contracts (awarded in 2009) have never received their initial x-ray and magnetometer training from FPS. The official stated that some of these guards are working at screening posts without having received the training. 
Further, officials at another guard company in a different FPS region stated that, according to their records, 78 of 295 guards (about 26 percent) deployed under their contract have never received FPS’s x-ray and magnetometer training. These officials stated that FPS’s regional officials were informed of the problem, but allowed guards to continue to work under this contract, despite not having completed required training. Because FPS is responsible for this training, according to guard company officials, no action was taken against the company. In May 2013, FPS headquarters officials stated that they were unaware of any regions in which guards had not received screener training. In July 2013, according to FPS officials, the agency began designing a “train-the-trainer” pilot program with four guard companies. Through this pilot program, contract guard company instructors, in addition to FPS inspectors, will be certified to provide screener training to guards. FPS officials stated that they plan to implement the pilot program in the first quarter of 2014. According to FPS officials, once implemented, FPS’s train-the-trainer program should increase the number of certified instructors capable of providing screener training nationwide. If this program is fully implemented, FPS screener training could be provided largely by the guard companies. This is the method by which four of the six agencies we spoke with provide their guards with screener training. In addition, officials from 13 of the 31 guard companies that we interviewed stated that responsibility for x-ray and magnetometer training should be shifted to the guard companies to alleviate scheduling problems, while officials from 7 companies stated that FPS should retain this responsibility. The remaining 11 guard companies did not state an opinion on this issue. FPS’s train-the-trainer program could provide resources to address the challenges it faces in providing screener training to guards. 
However, the program is in its beginning stages and there are still guards deployed to federal facilities who have not received required screener training. Screener training is essential to helping prevent unauthorized individuals and items from entering federal facilities. Thus, it is critical that FPS immediately provide this training to those guards who have not received it. According to FPS officials, the agency requires its guards to receive training on how to respond to an active-shooter scenario, but we found that some guards have not received it. According to DHS, an active shooter is an individual killing or attempting to kill people in a confined and populated area. Since June 2009 there have been several incidents involving active shooters at government facilities. For instance, in 2010 an active shooter opened fire in the Lloyd D. George Federal Courthouse in Las Vegas, Nevada, killing a security officer and wounding a deputy U.S. Marshal. According to FPS officials, since 2010 it has provided training to guards on how they should respond during an active-shooter incident as part of the 8-hour FPS-provided orientation training guards receive. FPS officials were not able to specify how much time is devoted to this training, but said that it is a small portion of the 2-hour special situations training. In addition, officials stated that guards hired before 2010 should have received this information during guard-company-provided training on the guards’ post orders (which outline the duties and responsibilities associated with each guard post and include information on responding to an active-shooter situation) during basic and refresher training. However, when we asked contract guard company officials if their guards had received training on how guards should respond during active-shooter incidents, responses varied. 
For example, of the 16 contract guard companies we interviewed about this topic, eight stated that their guards have received active-shooter scenario training during orientation; five stated that FPS has not provided active-shooter scenario training to their guards; and three stated that FPS had not provided active-shooter scenario training during the FPS-provided orientation training, but that the topic was covered in one of the following ways: during guard-company-provided basic or refresher training; through on-the-job instruction FPS provided during post inspections; or through a link FPS provided to an active-shooter training video, which the company shows its guards. The six agencies we reviewed—State, the Holocaust Museum, NASA, PFPA, the Kennedy Center, and DOE—also recognize this threat, and five of them require active-shooter response training for their contract guards. According to officials at DOE, the agency is in the process of requiring guards to complete active-shooter response training to ensure they are capable of addressing this threat and protecting facility occupants. Similarly, Holocaust Museum officials stated that they require this training because current trends in law enforcement warrant active-shooter response training for guards. In May 2013, an FPS official stated that the agency is collaborating with its guard companies to develop a standardized national lesson plan for guards and revising the Security Guard Information Manual (SGIM). FPS officials stated that the lesson plan being developed is meant to standardize the training guards receive. However, according to the official, FPS has not yet decided whether the national lesson plan will specify countermeasures necessary to mitigate threats from active shooters. FPS does not have a timeline for developing or implementing a national lesson plan for guards. 
Until it develops one, some guards may continue to go without training on how guards should respond to incidents at federal facilities involving an active shooter. FPS requires some contract guard company instructor certifications, but does not require guard company instructors to be certified to teach basic or refresher training or have any training in basic instructional techniques. According to ISC guidance, training is a critical component of developing a well-qualified guard force and all training should be done with a certified instructor or training organization. Similarly, Federal Law Enforcement Training Accreditation Board (FLETA)-accredited training programs have an instructor development course and review process to ensure that instructors provide consistent, quality instruction. FPS requires that guard instructors be certified to provide training in CPR, first aid, AED, and firearms and have a minimum of 2 years of law enforcement, military, or security training experience. However, FPS has no certification requirements for instructors teaching the guards’ basic and refresher training, nor does FPS require instructors to be knowledgeable in instructional techniques. Basic training, which represents 64 hours of the initial 120 hours of training that guards receive, and the 40-hour refresher class cover topics included in the SGIM, such as access control and crime detection and response. In contrast to FPS, three of the six selected agencies that we reviewed (NASA, DOE, and the Holocaust Museum) require guard instructors to attend instructor training or to be certified by the agency. For example, NASA requires contract guard company instructors to be certified by a NASA training academy. NASA stated that instructor certification requirements have reduced legal liabilities, ensured standardization of training, and led to greater efficiency of its training programs throughout the agency. 
Under NASA’s instructor certification program, instructors must, among other requirements: complete training from the Federal Law Enforcement Training Center (FLETC); serve a 2-week internship as a student instructor to observe and work with an established instructor, including an evaluation; meet physical fitness requirements; undergo re-evaluation every 2 years to ensure they are effective and follow required lesson plans; and attend an annual workshop for instructors on curriculum development. Similarly, DOE requires that, in addition to specific certifications for the level of training they provide, instructors complete a basic instructor training course and be evaluated for competency at least once every 36 months. According to some of FPS’s guard companies, the absence of an instructor certification requirement has affected the quality of training provided to some guards. For example, 6 of FPS’s 31 contract guard companies stated that they have experienced problems related to training quality when taking over a contract from a previous guard company and employing guards who had worked for the previous company. The companies stated that they either retrained or did not hire guards who they believed had been inadequately trained by the previous company. In these situations, costs may be passed on to FPS via increased rates for guard services to account for the increased training costs to guard companies. Four of the 31 guard companies stated that they already have additional requirements or training for instructors. However, such additional requirements and training are on a company-by-company basis and do not necessarily conform to any standards. Sixteen of the guard companies and officials from FLETA and CALEA stated that FPS should standardize instructor training and certification requirements or require FPS certification for guard instructors. 
Such standardization would help ensure quality and consistency in the training received by guards providing protective services across GSA’s federal buildings. FPS officials stated that FPS reviews each instructor’s resume to ensure that instructors have the minimum qualifications necessary to provide guard instruction. Some contract guard files we reviewed did not contain all required documentation. We reviewed 276 randomly selected (nongeneralizable) guard files maintained by 11 of the 31 guard companies we interviewed and found that 212 files (77 percent) contained the required training and certification documentation, but 64 files (23 percent) were missing one or more required documents. See table 2 for information on the results of our review. These 64 files were maintained by 9 of the 11 companies. According to FPS’s policies and contracts for guard service, each contract guard company must maintain a file for each guard to document that all FPS training, certification, and qualification requirements have been met and are current. We examined the files against the required training, certification, and qualification documentation listed by FPS on the forms it uses to conduct its monthly file reviews. As shown in table 2, the 64 guard files were missing 117 total documents. For example: Three files were missing documentation of basic training, and 15 were missing documentation of refresher training, both of which cover the guards’ roles and responsibilities and duties such as access control. Five files were missing documentation of screener training, which as mentioned above, is meant to prepare guards to prevent prohibited items from being brought into federal facilities. Seventeen files were missing documentation of initial weapons training, which indicates guards have passed the 40-hour weapons training, including 32 hours of firearms training. One file was missing the form that certifies that a guard has not been convicted of a crime of domestic violence. 
In addition to the 117 missing documents, there was no indication that FPS had monitored firearms qualifications in 68 of the 276 guard files reviewed. The other 208 files had a current firearms qualification form with an indication (such as initials or a signature) that FPS witnessed the qualification. The FPS Protective Security Officer (PSO) File Review Form lists documentation requirements as “Firearms Qualifications Witnessed by an FPS Employee,” but is not clear regarding whether documentation of the FPS witness is required in the file. Although FPS has taken some steps to address its challenges in this area, our previous recommendations are a guide to furthering its efforts. For example, we recommended that FPS rigorously and consistently monitor contract guard companies’ performance and step up enforcement against guard companies that are not complying with the terms of the contract. Although FPS agreed with this recommendation, it has yet to implement it. According to FPS officials, it plans to address this recommendation in the near future. DHS agreed with our 2010 and 2012 recommendations to develop a comprehensive and reliable system for contract guard oversight, but it still does not have such a system. Without a comprehensive guard management system, FPS has no independent means of ensuring that its contract guard companies have met contract requirements, such as providing qualified guards to federal facilities. According to FPS officials, it plans to address this recommendation in the near future. GAO’s Standards for Internal Control in the Federal Government also states that program managers need access to data on agency operations to determine whether they are meeting goals for the effective and efficient use of resources. The standards state that such information should be captured and distributed in a form that permits officials to perform their duties efficiently. 
In the absence of a comprehensive guard-data-management system, FPS requires its guard companies to maintain files containing guard training and certification information and to submit a monthly report with this information to their CORs. FPS headquarters officials stated that the monthly reports are primarily to ensure that regional managers have access to training and certification information, although there are no requirements for regional officials to use or analyze the monthly reports. The officials stated that regions are occasionally asked to supply these reports to FPS headquarters as a check to ensure regions and guard companies are sharing this information, but that headquarters officials do not analyze the data. Although FPS does not have a system to track guard data, 13 of FPS’s 31 guard companies maintain training, certification, and qualification data in either proprietary or commercially available software programs with various management capabilities. For example, one system used by multiple companies tracks the training and certification status of each guard and prevents the company from scheduling the guard to work if the guard is not in compliance with requirements. Virginia’s Department of Criminal Justice Services (DCJS) has a database system that also allows training academies, guards, and guard companies to upload training and certification documentation so that DCJS can track the training and certification status of guards. 
According to industry stakeholders and contract guard company officials, a comprehensive guard management system could:

provide FPS direct access for updating guard training, certification, and qualification data while performing post inspections and other oversight activities such as file reviews;

enable FPS and guard company officials to more easily develop reports and identify trends in data to recognize areas that need attention;

store training, certification, and qualification documentation, which could reduce the need to obtain documentation from a prior guard company when a new company takes over a contract; and

help identify guards working under more than one FPS contract and verify that they do not work more than the maximum of 12 hours in one day.

FPS’s monthly reviews of contract guard companies’ guard files are its primary management control for ensuring that the companies are complying with contractual requirements for guards’ training, certification, and qualifications. FPS’s directive for monthly file reviews requires, for example, that:

Ten percent of the guard files for each contract are to be selected randomly for the monthly review.

Selected files should be compared to the data in the reports provided to FPS by the contract guard company that month.

FPS reviewers must note any deficiencies in which the file documentation and dates do not match the data included in the monthly report and promptly notify the guard company, COR, and FPS regional program manager of the deficiencies.

If there are deficiencies in 40 percent or more of the reviewed files, the region must immediately initiate an audit of 100 percent of the company’s guard files.

Results should be recorded in FPS’s Administrative Audit Form and individual Protective Security Officer File Review Forms.

An effort should be made to exclude files that have been reviewed within the last 6 months from the selection process.
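The directive's sampling and escalation rules can be sketched in code. This is a hypothetical illustration of the stated policy, not FPS's actual procedure; the function and variable names are ours.

```python
import random

def select_files_for_review(guard_files, recently_reviewed):
    """Randomly select 10 percent of a contract's guard files for the
    monthly review, excluding (where possible) files reviewed within
    the last 6 months, per the directive."""
    eligible = [f for f in guard_files if f not in recently_reviewed]
    # Fall back to the full set if every file was recently reviewed.
    pool = eligible if eligible else list(guard_files)
    sample_size = max(1, round(len(guard_files) * 0.10))
    return random.sample(pool, min(sample_size, len(pool)))

def full_audit_required(files_reviewed, files_deficient):
    """A 100 percent audit of the company's guard files is triggered
    when 40 percent or more of the reviewed files are deficient."""
    return files_deficient / files_reviewed >= 0.40
```

A key property of this procedure, as discussed later in the report, is that the sample is drawn only at review time; announcing the selected files in advance defeats the purpose of random selection.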
FPS’s directive on its monthly file reviews does not include specific information about the importance of randomly selecting guard files and ensuring contract guard company personnel do not know which files will be reviewed. In the absence of specific guidance regarding how files are to be selected, the four regions we visited varied in how they conducted the monthly file reviews. For example, three of the four regions we visited told us that they review randomly selected files either at the guard company’s office or the guard company gives them electronic access to the files for review. In contrast, officials in the fourth FPS region stated that they submit a list of the selected guard files to the guard company 24 to 48 hours before the file review and request that the files be delivered either electronically or in hard copy to the regional office. As such, contract guard company officials in that region stated that they can review the selected files to ensure that they comply with requirements prior to delivering them to FPS. FPS headquarters officials stated that this indicates that guard company officials are performing due diligence to ensure the file is up to date. However, this practice decreases the utility of randomly selecting files for review and reduces the ability of FPS reviewers to accurately assess the guard company’s ongoing ability to keep all of its guard files up to date. Additionally, officials at a contract guard company in another FPS region stated that the COR occasionally asks the guard company to select the files for review and bring them to the regional office. FPS stated that this is not standard practice. Allowing contract guard company officials to select files for review by FPS could result in selection bias and affect the results of FPS’s review. 
FPS headquarters officials stated that monthly file review results are reported to headquarters and that the data are combined into a spreadsheet, summarizing the number of deficiencies by contract, region, and nationally. Officials stated that these data are used to identify possible trends in vendor documentation and to determine if corrective actions need to be taken at the regional level. However, if file review results are affected by selection bias or by guard company actions to alter the contents of the files selected for review, these data may not lead to an accurate understanding of trends or the need for corrective action. The Government Performance and Results Act Modernization Act of 2010 requires agencies to develop an approach to validation and verification in order to assess the reliability of performance data. However, FPS’s directive regarding monthly file reviews, discussed above, does not include requirements for reviewing and verifying the results of the file reviews. From March 2012 through March 2013, FPS reviewed more than 23,000 guard files as part of its monthly review process. FPS found that a majority of the guard files had the required documentation but more than 800 (about 3 percent) did not. FPS’s file reviews for that period showed files missing, for example, documentation of screener training, initial weapons training, CPR certification, and firearms qualifications. However, without an approach to reviewing and verifying results, FPS is not able to use these results to accurately assess the performance of its contract guard companies in complying with training and certification requirements. As part of its monthly file reviews for November 2012 through March 2013, FPS reviewed some of the same guard files we examined, but our results differed substantially from what FPS found. 
Specifically, we compared the results of FPS’s file reviews for the 11 contracts for which we conducted file reviews; we found that 29 of the 276 files we reviewed had also been reviewed by FPS. FPS’s review and our examination of each file occurred in the same month. For each of the 29 files, FPS did not identify any missing documentation. In contrast, we found that 6 of the 29 files did not have the required training and certification documentation (and some were missing more than one required document). In 4 of the 6 guard files, FPS’s review indicated that required documentation was present, but we were not able to find documentation of training and certification, such as initial weapons training, DHS orientation, and pre-employment drug screenings. We also identified files with expired documentation. For example, 2 of the 6 files had expired refresher-training documentation and another guard file had expired firearms qualification documentation. Since we used FPS’s file review checklist to conduct our file review, it is unclear why the results differed. FPS officials were unsure about the reasons for this, but stated that human error and contract requirements that differ from the requirements listed on administrative audit forms may have been factors. Additionally, differing results may be due to differences in the type of documentation accepted by GAO and FPS. For example, in our review of FPS monthly file review records for one contract, we identified 2 files for which, according to the PSO file review form, the FPS reviewer accepted documentation of CPR and AED training that we did not accept as valid. While FPS guard contracts require guard files to contain a copy of the CPR and AED certification card, the FPS reviewer accepted a roster of individuals who attended the training. However, the roster did not indicate whether attendees had passed the course or been officially certified and was not signed by an instructor.
FPS can take action against guard companies if it determines that a contract guard company has not complied with contractual requirements, but it may not have accurate information to do so. FPS’s contracts for guard services state that if guard companies do not comply with contract requirements (e.g., guard training, certification, and qualification requirements), FPS may require the contractor to take actions to ensure compliance in the future and also may reduce the contract price to reflect the reduced value of the service provided. Determining the extent to which FPS took actions against guard companies for not complying with guard training and certification requirements was not within the scope of our engagement. However, the results of our comparison of FPS’s guard file reviews to our reviews raise questions about whether FPS has effective management controls in place to identify areas in which guard companies have not complied with requirements. FPS continues to lack the management controls to ensure that its approximately 13,500 contract guards have the required training, certification, and qualifications, which are central to effectively protecting employees and visitors in federal facilities. FPS agreed with the recommendations in our 2010 and 2012 reports. We recommended, among other things, that FPS develop and implement a comprehensive system for guard oversight. Without such a system, FPS has no independent means of ensuring that its 13,500 guards deployed to federal facilities are properly trained and qualified. As such, we strongly encourage FPS to continue addressing the challenges we identified in our prior work and to be more proactive in managing its contract guard workforce. Although FPS has taken steps to address some of our prior recommendations, we found that FPS still has challenges providing screener training to some guards.
Consequently, some guards deployed to federal facilities may be using x-ray and magnetometer equipment that they are not qualified to use. This raises questions about the capability of some guards to screen access control points at federal facilities, one of their primary responsibilities. According to FPS officials, the agency has recently decided to make changes to its guard program, including developing a national lesson plan. We agree with this decision, given the problems that we have identified. A national lesson plan could help FPS standardize and ensure consistency in its training efforts. For example, without ensuring that all guards receive training on how to respond to incidents at federal facilities involving an active shooter, FPS has limited assurance that its guards are prepared for this threat. Similarly, the lack of certification requirements for instructors who teach basic and refresher training may ultimately affect guards’ ability to perform their duties. Finally, inconsistencies in how FPS regional officials conduct monthly file reviews (which are FPS’s primary management control for ensuring compliance with the guard contract requirements) indicate that the current guidance for monthly file reviews is insufficient to ensure that, for instance, guard companies do not have the opportunity to select files for review and thus affect the results of the file reviews. Further, our work raises questions about the reliability and quality of FPS’s monthly file reviews. These findings are of particular concern given that FPS continues to pay guard companies over half a billion dollars annually to provide qualified guards, yet it appears that some guards have been deployed to federal facilities without meeting all of the training, certification, and qualification requirements.
To improve the management and oversight of FPS’s contract guard program, we recommend that the Secretary of Homeland Security direct the Under Secretary of NPPD and the Director of FPS to take the following three actions:

take immediate steps to determine which guards have not had screener or active-shooter scenario training and provide it to them and, as part of developing a national lesson plan, decide how and how often these trainings will be provided in the future;

require that contract guard companies’ instructors be certified to teach basic and refresher training courses to guards and evaluate whether a standardized instructor certification process should be implemented; and

develop and implement procedures for monthly guard-file reviews to ensure consistency in selecting files and verifying the results.

We provided a draft of this report to DHS for review and comment. DHS concurred with our recommendations and provided written comments that are reprinted in appendix II. DHS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report will be available at no charge on GAO’s web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
In addition to the contact name above, Tammy Conquest, Assistant Director; Antoine Clark; Colin Fallon; Kathleen Gilhooly; Katherine Hamer; Amanda Miller; Ramon Rodriguez; William Woods; and Gwyneth Woolwine made key contributions to this report.

FPS relies on a privately contracted guard force (about 13,500 guards) to provide security to federal facilities under the custody and control of the General Services Administration. In 2010 and 2012, GAO reported that FPS faced challenges overseeing its contract guard program, specifically in ensuring guards’ qualifications. GAO was asked to update the status of FPS’s contract guard oversight. This report examines (1) how FPS’s requirements for contract guards compare to those of selected federal agencies and challenges, if any, that FPS faces in ensuring its requirements are met; (2) the extent to which guard companies have documented compliance with FPS’s guard requirements; and (3) the management controls FPS uses to ensure compliance with its guard requirements. GAO reviewed 31 FPS guard contracts, and analyzed guard files from 11 contracts, selected based on geographic diversity; interviewed officials from guard companies, FPS headquarters, and 4 of 11 FPS regions; and reviewed the contract guard requirements and processes at six federal agencies, selected for their comparability to FPS. Several of the Department of Homeland Security’s (DHS) Federal Protective Service’s (FPS) guard requirements are generally comparable to those of the six selected agencies GAO reviewed, but FPS faces challenges in some aspects of guards’ training. FPS and the six selected agencies GAO reviewed require basic, firearms, and screener (x-ray and magnetometer equipment) training for their armed guards. However, GAO found that providing screener training remains a challenge for FPS.
For example, officials from one of FPS's contract guard companies stated that 133 (about 38 percent) of its approximately 350 guards have never received this training. Similarly, according to officials at five guard companies, some of their contract guards have not received training on how to respond during incidents involving an active shooter. Additionally, while contract guard industry guidance states that all training should be done with a certified instructor, GAO found that FPS does not require guard instructors to be certified to provide basic and refresher training, which represents the majority of guards' training. According to six guard companies, the lack of a requirement has led to having to retrain some guards, potentially increasing costs to FPS. Twenty-three percent of contract guard files GAO reviewed did not have required training and certification documentation. GAO reviewed 276 randomly selected (non-generalizable) guard files maintained by 11 of the 31 guard companies GAO interviewed and found that 212 files (77 percent) contained the required training and certification documentation, but 64 files (23 percent) were missing one or more required documents. For example, the 64 files were missing items such as documentation of initial weapons and screener training and firearms qualifications. Although FPS has taken steps to address its challenges in this area, GAO's previous recommendations concerning monitoring guard companies' performance are a guide to furthering FPS's efforts. According to FPS officials, it plans to address GAO's recommendations in the near future. FPS continues to lack effective management controls to ensure its guards have met its training and certification requirements. For instance, although FPS agreed with GAO's 2010 and 2012 recommendations that it develop a comprehensive and reliable system for managing information on guards' training, certifications, and qualifications, it still does not have such a system. 
According to FPS officials, it plans to address this recommendation in the near future. FPS also lacks sufficient management controls to ensure consistency in its monthly guard file review process (its primary management control for ensuring that guards are trained and certified), raising questions about the utility of this process. In the absence of specific guidance regarding how files are to be selected, FPS’s 11 regions varied in how they conducted the monthly file reviews. For example, FPS officials from three regions stated that they randomly select their files for review, while officials from one guard company in another region stated that FPS asks the guard company to select the files for review. Allowing contract guard company officials to select files for review by FPS could result in selection bias and affect the results of FPS’s review. FPS also lacks guidance on reviewing and verifying the results of its guard-file reviews. Without such guidance, FPS may not be able to determine the accuracy of its monthly file review results or if its contract guard companies are complying with the guard training and certification requirements. GAO recommends that the Secretary of DHS direct FPS to take immediate steps to determine which guards have not had screener or active-shooter scenario training and provide it to them; require that guard instructors be certified to teach basic and refresher training; and develop and implement guidance for selecting guard files and verifying the results. DHS concurred with GAO’s recommendations.
Foreign military sales are made on a case-by-case basis. The cases are initiated by a foreign country sending a letter of request to DOD asking for various information, such as precise price data. After the country obtains and reviews this information and decides that it wants to do business with the U.S. government, DOD prepares a Letter of Offer and Acceptance (LOA) stating the terms of the sale for the goods and services being provided. If accepted by the country, the LOA becomes the formal sales agreement by which the U.S. government contracts with the country to sell it defense articles or services. Once the LOA is accepted, the foreign country is generally required to pay, in advance, amounts necessary to cover costs associated with the sales agreement. DOD then uses these funds, held in trust by the Department of the Treasury, to pay private contractors and to reimburse DOD activities for the cost of executing and administering the FMS agreement. As payments are made, the military services report detailed disbursing and accounting data to a central activity—the Defense Finance and Accounting Service, Denver Center—which maintains the records of each country’s trust fund balance and issues quarterly statements to foreign customers summarizing amounts charged to their cases. In October 1991, DOD established DBOF, which consolidated into one revolving fund nine existing industrial and stock funds that had operated within DOD for about 45 years, as well as the Defense Finance and Accounting Service, Defense Industrial Plant Equipment Service, Defense Commissary Agency, Defense Reutilization and Marketing Service, and Defense Technical Information Service. In establishing DBOF, one of DOD’s primary goals was to identify the total cost of operations and to highlight the cost implications of management decisions.
DOD’s Financial Management Regulation 7000.14-R, Volumes 11B and 15, prescribes the financial management requirements, systems, and functions that WCF activities are to follow when establishing prices and billing FMS customers. Generally, billings to these customers shall reimburse the WCF for the full cost incurred by the U.S. government for providing the goods or services. According to the regulation, full cost is determined by the application of the stabilized rates or unit prices, which are set to achieve a break-even operating result in the budget year—that is, neither to make a profit nor incur a loss. Since the concept of DBOF was first put forth in February 1991, we have monitored and evaluated its implementation and operation. We have issued numerous reports discussing various problems with fragmented cost accounting systems and inaccurate financial reporting. More specifically, one problem we found was that not all costs were being captured in the price-setting process, thus resulting in less than full cost recovery. However, in our May 1997 testimony before the Subcommittee on Defense, Senate Committee on Appropriations, we noted that DOD has progressed significantly in identifying the cost of doing business and including those costs in the prices DBOF charged its customers. To determine regulatory requirements for billing FMS customers using stabilized rates and prices, we obtained and analyzed laws, policies, procedures, regulations, and guidance from DOD, Army, Navy, and Air Force officials. During our visits to DOD locations, we gathered and analyzed budget and accounting reports to identify cost elements in the prices of goods and services sold to FMS customers. We compared these cost elements with other cost data in various databases and met with responsible agency officials to discuss and clarify any differences in (1) cost elements used for FMS and DOD customers and (2) the amounts charged.
To determine the amount of civilian pension and postretirement health benefit costs that should have been collected from FMS customers by WCF supply activities, we obtained and analyzed financial reports that showed sales and expense data for Army, Navy, Air Force, and Defense Logistics Agency supply activities for fiscal years 1992 through 1996. Because these activities generally did not maintain data to identify how much time personnel spent providing services to FMS customers, we estimated the amounts of civilian pension and postretirement health benefit costs related to FMS using certain assumptions. To do this, we first calculated the dollar value of FMS sales as a percentage of total dollar sales for each of the activities for each fiscal year. For example, if a supply activity showed that its annual sales were $1 billion of which $100 million were to FMS customers, we calculated sales to FMS customers to be 10 percent ($100 million divided by $1 billion). To calculate the pension benefit costs, we multiplied the percent of each year’s FMS sales by the total amount of civilian personnel salaries reported as paid during the year to determine a pro rata dollar amount for FMS civilian personnel salaries. Finally, to determine the estimated amount of civilian pension benefit costs to be collected from FMS customers, we multiplied the pro rata dollar amount of FMS personnel salaries times the civilian pension benefit cost factor of 14.7 percent for each activity for fiscal years 1992 through 1996. According to the Office of Management and Budget (OMB) and DOD officials, the 14.7 percent rate represents the “unfunded” portion of the pension benefit cost which is derived by subtracting DOD’s 7 percent contribution to the pension costs of its employees (21.7 percent less 7 percent). The 7 percent DOD contribution is already included in the stabilized rate as a funded fringe benefit cost. 
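The pension-cost estimate described above reduces to a simple pro-rata formula. The sketch below restates our methodology in code; it is illustrative only (the function and variable names are ours, and the salary figure in the usage example is hypothetical), not an official DOD computation.

```python
def estimated_fms_pension_cost(fms_sales, total_sales,
                               total_civilian_salaries,
                               unfunded_pension_rate=0.147):
    """Estimate the unfunded civilian pension cost attributable to FMS.

    The FMS share of annual sales serves as a proxy for the FMS share
    of civilian labor. The 14.7 percent factor is the unfunded portion
    of the pension cost: the 21.7 percent full cost less DOD's 7 percent
    contribution, which is already in the stabilized rate."""
    fms_share = fms_sales / total_sales            # e.g., $100M / $1B = 10%
    fms_salaries = fms_share * total_civilian_salaries
    return fms_salaries * unfunded_pension_rate
```

For example, an activity with $100 million of FMS sales out of $1 billion in total sales and (hypothetically) $200 million in reported civilian salaries would yield an estimate of 0.10 × $200 million × 0.147, or $2.94 million.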
To determine the amount of civilian postretirement health benefit cost, we multiplied the percentage of FMS sales to total sales times the civilian end strength for each supply activity for fiscal years 1992 through 1996. For example, if the pro rata amount of FMS sales to total sales was 10 percent for fiscal year 1996 and an activity reported civilian end strength at 5,000 employees for the same period, our calculated FMS civilian end strength would be 500 full time employees involved with FMS activities (10 percent times 5,000 employees). Using these numbers, we multiplied the pro rata amount by $2,166 which was the Office of Personnel Management (OPM) calculated amount of average postretirement health benefit cost per employee for fiscal year 1996. To determine the postretirement health benefit cost per employee for fiscal year 1995 and earlier, we contacted officials in OPM’s Office of Actuaries, including the Deputy Director of the Office of Actuaries. According to the OPM officials, prior to fiscal year 1996, OPM had not published any formal amounts for agencies to use in calculating pension or postretirement health benefit costs. However, OPM officials told us that postretirement health benefit costs have increased by about 7 percent each fiscal year. Therefore, according to OPM officials, we could determine the fiscal year 1995 postretirement health benefit cost by dividing the fiscal year 1996 cost of $2,166 by 107 percent. Fiscal year 1994 could then be determined by dividing the fiscal year 1995 postretirement health benefit cost by 107 percent and so on for each preceding fiscal year. The OPM officials generally agreed with our methodologies for calculating estimated pension and postretirement health benefit costs. We did not calculate pension benefit cost for nonsupply activities because the nonsupply activities were generally including these costs in their prices for FMS customers. 
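The health-benefit estimate combines the same pro-rata step with OPM's per-employee cost, deflated by 7 percent for each year before fiscal year 1996. A sketch of that calculation, with names of our own choosing:

```python
def health_cost_per_employee(fiscal_year, fy1996_cost=2166.0,
                             annual_growth=0.07):
    """OPM's average postretirement health benefit cost per employee:
    $2,166 in FY 1996, divided by 1.07 for each earlier fiscal year."""
    return fy1996_cost / (1 + annual_growth) ** (1996 - fiscal_year)

def estimated_fms_health_cost(fms_sales, total_sales,
                              civilian_end_strength, fiscal_year):
    """FMS share of civilian end strength times the per-employee cost."""
    fms_employees = (fms_sales / total_sales) * civilian_end_strength
    return fms_employees * health_cost_per_employee(fiscal_year)
```

Using the example in the text, 10 percent of a 5,000-employee end strength is 500 full-time employees; at $2,166 each, the FY 1996 estimate is $1,083,000, and the FY 1995 per-employee cost is $2,166 ÷ 1.07, or about $2,024.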
They did not, however, include the postretirement health benefit cost in their prices. Since they were recovering the largest segment of the retirement benefit cost, we did not attempt to estimate undercharges for postretirement health benefit cost for the nonsupply activities. To do so would have required us to analyze numerous detailed accounting and budget reports of over 100 additional WCF activities. Over the years, both we and the DOD Inspector General have reported that DOD’s financial systems used to collect and report data are not capable of producing accurate and reliable information. Our estimates were based on financial information provided by DOD, which we did not independently verify. We performed our work at the headquarters, Departments of the Army, Navy, Air Force; Defense Security Assistance Agency; and Office of the Under Secretary of Defense (Comptroller) in Washington, D.C. We also performed audit work at the Army Materiel Command, Alexandria, Virginia; Air Force Materiel Command, Wright-Patterson Air Force Base, Dayton, Ohio; Naval Inventory Control Point, Mechanicsburg, Pennsylvania; Naval Air Warfare Center, Patuxent River, Maryland; Naval Surface Warfare Center, Indian Head, Maryland; Defense Logistics Agency, Fort Belvoir, Virginia; and Letterkenny Army Depot, Chambersburg, Pennsylvania. We conducted our review from November 1996 through July 1997 in accordance with generally accepted government auditing standards. We requested written comments on a draft of the report from the Secretary of Defense or his designee. The Acting Under Secretary of Defense (Comptroller) provided written comments, which are discussed in the “Agency Comments” section and reprinted in appendix I. The concept of a stabilized rate is a viable method to use for pricing goods and services sold to FMS customers. If this rate is applied consistently and contains all known cost elements, it should recover the full cost of operations over the long term.
In analyzing the cost elements in the stabilized rate, we identified additional elements—pension and postretirement health benefit costs which are part of the civilian labor costs—that should have been included in developing the stabilized rate and charged to FMS customers. Omission of these costs resulted in estimated underbillings of more than $40.5 million since fiscal year 1992. Present DOD policy requires the WCF activities to establish prices that allow them to recover from their customers the expected costs, including any prior years’ losses. WCF activities are to establish prices prior to the start of each fiscal year and apply these predetermined (stabilized or standard) prices to most orders and requisitions received during the year. Because sales prices are based on expected costs and workload, higher-than-expected costs or lower-than-expected customer demand for goods and services can cause the WCF activities to incur losses. Conversely, lower-than-expected costs or higher-than-expected customer demand for goods and services can result in profits. The process for establishing stabilized prices for WCFs generally begins about 2 years before the prices go into effect, with managers from each WCF developing workload projections for the budget year. After WCF managers estimate their workloads based on customer input, they (1) use productivity projections to estimate how many people they will need to accomplish the work, (2) prepare a budget that identifies the labor, material, and other expected costs, and (3) develop prices that, when applied to the projected workload, should allow them to recover operating costs from their customers. Not all cost elements are applicable to all WCF activities. For example, the cost element of inventory losses/obsolescence generally applies only to WCF supply activities that maintain inventories. 
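The break-even pricing logic described above can be summarized in a one-line formula. The sketch below is a simplification of the budget process, not DOD's actual rate-setting model; it assumes prior-year gains are recorded as positive amounts and losses as negative, and the names are ours.

```python
def stabilized_rate(expected_costs, prior_year_operating_result,
                    projected_workload_hours):
    """Break-even stabilized rate per workload hour: recover expected
    costs, plus any prior-year loss (negative result) or minus any
    prior-year gain (positive result), over the projected workload."""
    recoverable = expected_costs - prior_year_operating_result
    return recoverable / projected_workload_hours
```

For instance, an activity expecting $100 million in costs, carrying a $5 million prior-year loss, and projecting 1 million direct labor hours would set a rate of $105 per hour. Because the rate is fixed before the year begins, any shortfall in actual workload or overrun in actual costs surfaces as a loss to be recovered in a later year's rate, which is how the stabilized-price policy recovers full cost only over the long term.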
Below is a list of major cost elements used to develop stabilized rates:

direct and indirect labor,
direct material,
general and administrative expenses,
inventory losses/obsolescence,
inventory maintenance,
condemnation of inventory items,
accumulated operating results gains or losses,
depreciation, and
joint logistics systems center (JLSC) surcharge.

Major commands responsible for the overall management of the WCFs review the budget estimates and consolidate individual business area activities’ budget estimates. The military services’ and DOD components’ headquarters and the Office of the Secretary of Defense also review the budget estimates before they are submitted to the Congress as part of the annual budget. Any changes made during the DOD budget review process are incorporated into the WCFs’ prices before the beginning of the fiscal year. With the exception of retirement benefit costs for civilian employees, which is discussed below, we found that all of the key cost elements to recover full cost from FMS customers are now included in the stabilized price. The costs not charged by the WCF supply activities, which were responsible for about $1.5 billion (75 percent) of the WCFs’ annual sales to FMS customers, consisted of a portion of the government’s share of the full cost for pension and postretirement health benefit costs for civilian personnel who worked on FMS cases. The employee and the employing agency both contribute annually toward the cost of the future pension benefits. While the contributions made by DOD are now part of the stabilized rate, the employee and agency contributions are less than the full cost of providing the pension benefits. Therefore, the federal government must, in effect, make up the funding shortfall. In addition, neither the agency nor the employee pays the federal government’s portion of postretirement health benefit costs.
Both the pension and postretirement health benefit costs will eventually be paid out of the general funds in the Treasury—not by DOD. Since the pension and postretirement health benefits are costs to the government, they should be added to the stabilized rate and recovered from FMS customers. In this regard, we found that the nonsupply activities we visited recognized this and modified the stabilized rate to include the full pension costs in the prices they charged FMS customers. However, they did not include the postretirement health benefit cost. As noted earlier, we did not attempt to estimate the postretirement health benefit cost for nonsupply activities. Including retirement benefit costs is consistent with the Statement of Federal Financial Accounting Standards Number 4, which states that federal agencies should measure and report direct and indirect costs that contribute to output, regardless of funding sources. It is also consistent with OMB Circular No. A-25, which established the guidelines for federal agencies to assess fees for government services. The guidance notes that user charges will be sufficient to recover the full cost to the federal government of providing the service, resource, or goods. The circular points out that “full cost” is to include all direct and indirect costs to any part of the federal government of providing a good, resource, or service. Under the circular, these costs include, but are not limited to, an appropriate share of direct and indirect personnel costs, such as accrued retirement cost not covered by employee contributions. Because WCF supply activities did not maintain data to identify the time personnel spent providing services to FMS customers, our estimates for civilian pension and postretirement health benefit costs were calculated based on assumptions discussed in our scope and methodology. Table 1 shows the results of our calculations for each of the WCF supply activities for fiscal years 1992 through 1996. 
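Because WCF supply activities did not track time spent on FMS work, GAO's $40.5 million figure was built from assumptions about labor shares and accrual rates. A calculation of that general kind can be sketched as below; the function name and every input are hypothetical, and GAO's actual assumptions are those described in its scope and methodology section.

```python
# Hypothetical reconstruction of a single activity-year undercharge
# estimate.  All parameter names and values are invented for illustration
# and do not reproduce GAO's actual assumptions.

def estimated_undercharge(civilian_labor_cost, fms_share, unfunded_accrual_rate):
    """Unbilled retirement-benefit cost attributable to FMS work.

    civilian_labor_cost   -- activity's total civilian labor cost for the year
    fms_share             -- assumed fraction of labor effort spent on FMS cases
    unfunded_accrual_rate -- government share of pension/postretirement health
                             benefit accruals not covered by agency or employee
                             contributions, as a fraction of labor cost
    """
    return civilian_labor_cost * fms_share * unfunded_accrual_rate

# One activity-year under made-up inputs:
print(f"${estimated_undercharge(200_000_000, 0.05, 0.08):,.0f}")  # -> $800,000
```

Summing such estimates across the WCF supply activities and fiscal years 1992 through 1996 is, in outline, how a cumulative figure like the one in table 1 would be assembled.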
When we discussed this matter with DOD Comptroller officials, they acknowledged that civilian retirement benefits were a cost to the government that should be included in the stabilized rate and charged to FMS customers. They told us they are planning to revise their policy so that this cost will be included in the prices charged FMS customers beginning no later than fiscal year 1998. With regard to the $40.5 million of undercharges shown in table 1 and any additional undercharges that were made during fiscal year 1997, DOD policy requires that all proper charges be recorded against the applicable FMS case. According to the policy, case closure does not stop the billing process. Further, the standard FMS sales contract provides that the FMS customer is to pay the U.S. government the total cost of the items even if that cost exceeds the amounts estimated in the LOA. Also, we have issued numerous reports over the years that have (1) identified tens of millions of dollars of undercharges related to the costs for goods and services provided to FMS customers and (2) recommended that DOD retroactively collect the underbillings. Generally, DOD agreed with our earlier findings and recommendations and has rebilled and collected undercharges in the past. Therefore, since DOD policy and the contractual terms provide for adjustments to an FMS case, even if it has been closed, and DOD has collected undercharges in the past, DOD should make every reasonable attempt to recover the past undercharges for civilian pension and postretirement health benefit costs. In this regard, DOD should first consider the cost effectiveness of determining how much each FMS customer was undercharged. DOD’s stabilized rate policy, if applied properly, should allow WCF activities to recover the full cost of their operations over the long term. However, the stabilized rate should be adjusted to include all pension and postretirement health benefit costs to the U.S. 
government for items sold or services provided to FMS customers. DOD recognizes that these additional retirement benefit costs, whose omission has resulted in millions of dollars of undercharges, should be charged to FMS customers, and is in the process of revising its policy to require that these costs be included in future rates. We recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to implement the stabilized rate policies and procedures as soon as possible to require WCF activities to include pension and postretirement health benefit costs in the prices they charge FMS customers, and make every reasonable attempt to bill for and collect the undercharges for pension and postretirement health benefit costs identified in this report. Such action should be taken only if cost effective to do so. DOD concurred with our findings and recommendations. The Acting Under Secretary of Defense (Comptroller) agreed that DOD should have been charging FMS customers for civilian retirement and postretirement health benefits and issued guidance on August 27, 1997, instructing that these charges be added to DOD’s prices effective immediately. The Acting Under Secretary also requested that DSAA and the military services review FMS cases, going back through fiscal year 1992, and bill the FMS customers for the costs of civilian retirement and postretirement health benefits where cost effective. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services, the Senate Committee on Governmental Affairs, the House Committee on National Security, the House Committee on Government Reform and Oversight, and the House and Senate Committees on Appropriations; the Secretary of Defense; the Director of the Office of Management and Budget; and other interested parties. We will make copies available to others upon request. 
Please contact me at (202) 512-6240 if you or your staff have any questions concerning this report. Other major contributors to this report are listed in appendix II. 
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) use of stabilized rates for charging foreign military sales (FMS) customers for goods and services sold through DOD's Defense Business Operations Fund (DBOF), focusing on whether: (1) there is a dollar difference in pricing goods and services at full cost compared to the stabilized rate; and (2) DOD's current practice of billing foreign customers at the stabilized rate is consistent with the full cost requirements of the Arms Export Control Act of 1976. 
GAO noted that: (1) DOD's stabilized rate generally is designed to recover full costs from DOD and FMS customers over the long term; (2) the concept of applying the stabilized rate is a viable method to recover the cost of goods and services from these customers; (3) GAO's analysis of cost elements in the stabilized rates showed that, generally, the stabilized rate included the cost elements necessary to recover full cost; (4) however, GAO did identify two cost elements--pension and postretirement health benefits--related to retirement benefit costs of civilian personnel working on FMS cases that were not included in the stabilized rates; (5) GAO estimates that Working Capital Fund (WCF) supply activities undercharged FMS customers at least $40.5 million during fiscal years (FY) 1992 through 1996 and will undercharge millions more in FY 1997; (6) GAO discussed this matter with DOD officials, and they agreed that not all civilian retirement benefit costs were included in the labor rates that activities were charging FMS customers; and (7) DOD now plans to revise its policy to require that this cost be included in the prices charged FMS customers.
OSHA administers the Occupational Safety and Health Act of 1970 (OSH Act), which was enacted to assure so far as possible safe and healthful working conditions for the nation’s workers. OSHA helps ensure the safety and health of 106 million private sector workers at approximately 8.7 million worksites in the United States by operating over 80 area offices that report to 1 of 10 regional offices. OSHA sets occupational safety and health standards and is responsible for enforcing them. The agency directly enforces these standards in about half the states; the remaining states have been granted authority by OSHA to set and enforce their own workplace safety and health standards under a state plan approved by OSHA. The OSH Act and OSHA’s regulations generally require employers to prepare and maintain records of work-related injuries and illnesses sustained by their workers and make them available to OSHA upon request. These requirements are referred to as OSHA’s recordkeeping requirements. OSHA has established definitions and guidelines to assist employers in determining which injuries and illnesses must be recorded. Employers are required to maintain a log of recordable injuries and illnesses incurred at each worksite. OSHA requires employers to post summaries of these injury and illness logs annually at each worksite and provide them to OSHA if requested. In addition, under a section of the OSH Act referred to as the whistleblower protection provision, employers are prohibited from retaliating against employees for taking certain protected actions, including reporting work-related injuries or illnesses, and OSHA is responsible for investigating workers’ complaints of retaliation. To help ensure compliance with federal occupational safety and health standards and OSHA’s recordkeeping requirements, OSHA conducts enforcement activities such as on-site inspections of worksites. 
OSHA conducts these inspections in response to fatalities, serious injuries, complaints from workers, and referrals. In addition, OSHA targets industries and employers with a high number of workplace injuries and illnesses for inspection. When inspecting worksites, OSHA inspectors identify hazards that could lead to workers’ injuries or illnesses, review worksites’ injury and illness records, evaluate employers’ safety and health management systems, and meet with employers and worker representatives to discuss their findings and possible courses of action to correct hazards and improve their systems. Employers that fail to comply with the safety and health standards may face sanctions, such as paying penalties for violations. In its field operations manual, OSHA provides guidance to inspectors, employers, and workers on compliance with safety and health standards, inspections, and penalty assessments. To help employers comply with safety and health standards and recordkeeping requirements, OSHA supplements its enforcement efforts with voluntary cooperative programs, outreach, and training in which OSHA invites employers to collaborate with the agency and uses a variety of methods to encourage employers to adopt practices designed to foster safer and healthier working conditions. For example, OSHA’s Voluntary Protection Programs (VPP) recognize employers with exemplary safety and health systems and relatively low injury and illness rates, and exempts them from routine inspections. Small employers that request on-site consultation services may be recognized through OSHA’s Safety & Health Achievement Recognition Program (SHARP), which exempts those with exemplary safety and health management systems from routine inspections for up to 3 years. OSHA also trains employers and workers on how to comply with its standards and other regulations by, for example, providing online materials and reaching out directly to employer and worker groups. 
For example, each OSHA area office typically has one outreach specialist who serves as a resource to a variety of groups, including businesses, trade associations, unions, and community groups. These specialists provide information on OSHA’s cooperative programs, training resources, and tools available on the agency’s website. In addition, during inspections, OSHA’s inspectors provide information to employers on the strengths and weaknesses of their safety and health management systems. OSHA encourages employers to take a multifaceted approach to preventing and controlling hazards and creating an effective safety and health management system or a positive safety culture. According to OSHA, the four elements of an effective safety and health management system are as follows: (1) Management commitment and employee involvement: Employers should develop a safety and health policy, communicate it to all employees, and demonstrate commitment to it by, for example, instilling accountability for safety and health and ensuring an open exchange of information about safety issues. Employees should be involved in safety- and health-related activities such as accident investigations. (2) Worksite analysis: Employers should have a thorough understanding of all hazardous situations to which employees may be exposed, as well as the ability to recognize and correct these hazards. Accurate injury and illness records can be used to identify and prevent work-related injuries and illnesses. (3) Hazard prevention and control: Employers should have clear procedures for preventing and controlling hazards identified through worksite analysis, such as a hazard tracking system and a written system for monitoring and maintaining workplace equipment. (4) Safety and health training: Training is necessary to reinforce and complement management’s commitment to safety and health and to ensure that all employees understand how to avoid exposure to hazards. 
As part of their safety and health management systems, many employers use safety incentive programs to encourage safety in the workplace. These programs provide workers with rewards for achieving certain safety goals. Examples of these rewards include cash, meals, tangible goods, and public recognition. Employers can provide such rewards on the basis of individual or group performance depending on the program’s design. There are two types of safety incentive programs: rate-based programs, which reward workers for achieving low rates of reported injuries or illnesses, and behavior-based programs, which reward workers for certain behaviors such as recommending safety improvements (see fig. 1). Rate-based programs provide workers or groups of workers with rewards such as bonuses and prizes for having no or a low number of work-related injuries and illnesses during a specified period. For example, an employer’s rate-based program may reward workers with $100 bonuses for having no reported work-related injuries or illnesses in a given year. Behavior-based programs provide workers or groups of workers with rewards for demonstrating safe behaviors but are not tied to low injury and illness rates. For example, an employer’s behavior-based program may reward workers with gift cards for identifying hazardous conditions and suggesting safety improvements. Some experts we interviewed used the term behavior-based safety programs to describe an approach to workplace safety that focuses on worker behavior as the cause of work-related injuries and illnesses. However, in this report, we use the term behavior-based program to define a type of safety incentive program that is a component of an employer's safety and health management system. These systems may also include other workplace safety policies, such as demerit systems that discipline workers for failing to follow safety procedures. 
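The distinction between the two program types can be expressed as two reward rules. The reward amounts and criteria below are hypothetical examples consistent with the descriptions above, not any particular employer's program.

```python
# Illustrative contrast between the two safety incentive program types.
# Reward amounts and criteria are hypothetical.

def rate_based_reward(reported_injuries: int, bonus: int = 100) -> int:
    """Pays only when the reported injury count is zero -- which is why
    critics argue it can reward non-reporting rather than safety."""
    return bonus if reported_injuries == 0 else 0

def behavior_based_reward(safety_suggestions: int, hazards_flagged: int,
                          per_action: int = 25) -> int:
    """Pays per demonstrated safe behavior, independent of the injury rate."""
    return per_action * (safety_suggestions + hazards_flagged)

# A worker who reports one injury forfeits the entire rate-based bonus...
print(rate_based_reward(1))         # -> 0
# ...but still earns behavior-based rewards for proactive actions.
print(behavior_based_reward(2, 1))  # -> 75
```

The asymmetry visible here, where a single reported injury zeroes out the rate-based reward while behavior-based rewards are unaffected, is the mechanism experts cite when arguing that rate-based programs may discourage reporting.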
For example, some employers require the participation of frontline workers and management in safety committees to help foster communication and address safety-related issues and encourage workers to promptly report injuries or illnesses and address safety hazards. Other workplace safety policies are designed to prevent injuries and illnesses by holding workers accountable for using safe work practices. Demerit systems discipline workers for unsafe work practices such as failing to follow safety procedures. For example, some employers have policies that discipline workers for not wearing protective gear or for other unsafe practices linked to reported injuries. In addition, some employers have drug and alcohol testing policies, which provide for the testing of workers (1) prior to employment, (2) at random intervals for some or all workers, (3) at scheduled times for all workers, (4) when there is evidence that suggests a worker may have used drugs or alcohol, or (5) after a workplace incident, such as an injury, occurs. Little conclusive academic research exists on whether safety incentive programs and other workplace safety policies affect workers’ injury and illness reporting, but several experts stated that rate-based programs may discourage injury and illness reporting. Of the 26 studies of workplace safety we reviewed, we identified 6 that evaluated the effect of safety incentive programs on workplace safety, but only 2 of these studies specifically evaluated the programs’ effect on reporting of injuries. Each of the six studies, however, had methodological limitations that prevent generalizing the effects of these programs on injury and illness reporting for all workers. The six studies that evaluated safety incentive programs reached different conclusions about their effect on workplace safety. 
Three studies—including the two that specifically evaluated the programs’ effect on reporting of injuries—focused on one type of safety incentive program and found that their effect on workplace safety was inconclusive or that the programs had no effect. For example, one study in which nurses were surveyed to determine how often injuries and illnesses were reported in their workplaces found that rate-based safety incentive programs had no effect on injury reporting; however, the study analyzed workers’ perceptions of reporting, which may differ from actual reporting due to, for example, faulty memories, and thus its results are not definitive. The three studies that did not focus on only one type of safety incentive program found that the programs reduced injuries; however, these studies did not quantify the programs’ effect on injury and illness reporting. The authors of these studies acknowledged that, when the programs provide incentives for not reporting an injury—such as providing a monetary reward for having a low injury and illness rate—workers may underreport injuries. For example, the authors of one study noted that workers may “intentionally fail to report injuries in an effort to preserve potential bonuses for their work groups.” Information on the six studies is summarized in table 1. (Jean Geiger Brown, Alison Trinkoff, Kenneth Rempher, Kathleen McPhaul, Barbara Brady, Jane Lipscomb, and Charles Muntaner, “Nurses Inclination to Report Work-Related Injuries: Organizational, Work-Group, and Individual Factors Associated with Reporting,” AAOHN Journal, vol. 53, no. 5 (2005): 213-217.) 
In addition to reviewing existing studies, we interviewed over 50 experts and industry officials from academia, employer associations, a law firm, a consulting firm, unions, and state and federal safety and health agencies to obtain their opinions about the effect of safety incentive programs and other workplace safety policies on injury and illness reporting. Several of them told us that an unintended consequence of rate-based programs may be discouraging workers from reporting injuries and illnesses. For example, when workers’ injuries are relatively minor or easy to hide, and if the rewards provided under the program are relatively large, workers may not report their injuries to preserve their rewards. Potential underreporting of injuries and illnesses is even greater when an incentive creates peer pressure on workers to not report injuries. For example, when all workers on a team get a reward only if no one on the team has an injury, there may be pressure on all members of the team to not report injuries. According to some experts we interviewed, it is difficult to quantify the effect safety incentive programs may have on injury and illness reporting partly because researchers do not have access to workers’ medical records. Without such access, workers who do not report their injuries cannot be identified and this information cannot be used to explore whether workers’ decisions to not report their injuries were linked to their employers’ safety incentive programs. Several experts and industry officials we interviewed also mentioned that, along with safety incentive programs, some workplace safety policies may discourage workers from reporting injuries and illnesses. For example, policies that punish workers for unsafe practices that are linked to injuries may—depending on the nature of the injury and the policy—inhibit them from reporting injuries. 
Such policies include demerit systems that have consequences for workers who report injuries or illnesses, such as giving workers warnings, demoting them, or terminating them for recurrences. However, some employers use demerit systems to discipline workers who engage in unsafe practices such as not wearing protective gear, and such demerit systems may have no effect on workers’ reporting of injuries and illnesses. According to officials from a union, workplace safety policies that single out workers who report injuries or illnesses by, for example, requiring them to wear identifying clothes such as an orange vest, may also discourage them from reporting. In addition, according to several experts, policies that require drug and alcohol testing after an injury is reported—compared to those that are applied on a routine basis to all workers—may deter workers from reporting injuries. We found only one study that evaluated the effect of these other workplace safety policies mentioned by experts and industry officials as having a potentially adverse effect on injury and illness reporting. This study evaluated the effect of post-incident drug testing on injury and illness reporting and found evidence that such testing may discourage reporting of relatively minor injuries that are easy to hide. While some safety incentive programs and other workplace safety policies may discourage injury and illness reporting, research we reviewed indicated that how employers manage safety has a greater influence on workers’ actions, including whether they are likely to report injuries and illnesses, than any one program or policy. Among the 26 studies we reviewed, most found that employers that promote a positive safety culture may encourage workers to use safe behaviors, report injuries and illnesses, or reduce the incidence of injuries and illnesses. We identified 21 studies that evaluated the effect of an employer’s safety culture on workplace safety. 
Of these studies, 16 indicated that having a good safety culture has a positive effect on workers’ use of safe behaviors, injury and illness rates, or reporting of injuries and illnesses, and 5 indicated that a good safety culture had a mixed or inconclusive effect. According to the studies we reviewed, workplaces with a positive safety culture placed a strong emphasis on safety by, for example, encouraging open communication about safety issues, placing a high priority on safety training, and having procedures that prevented breakdowns in workplace safety. Some researchers concluded that in such environments, workers felt that they could report injuries and illnesses without fear of reprisal or blame from management or fellow workers. Of the four studies we reviewed that evaluated the effect of a positive safety culture on reporting of work-related injuries or accidents, three found that having a positive safety culture increased the likelihood of injury and illness reporting. Each of these four studies had a methodological issue that may limit the generalizability of the findings; for example, three of the four included nonrandom samples, and the results may be affected by selection bias. Policies that help employers create a positive safety culture and keep workers safe and healthy were generally perceived as being proactive versus reactive. For example, employers with proactive policies that require workers to report near-miss incidents to help identify hazards and other safety concerns before an injury takes place were more likely to have a positive effect on injury and illness reporting. In contrast, according to the studies we reviewed, workplaces with a negative safety culture do not place a strong emphasis on safety. These employers do not encourage open communication about safety issues or prioritize safety training. 
According to two experts we interviewed, some employer safety programs focus on workers' behaviors as the cause of work-related injuries and illnesses and have policies that discipline workers for failing to follow safety procedures. As a result, workers in these environments may be less likely to report injuries or illnesses because, if they lack safety training, communication is poor, or they are not encouraged to report injuries and illnesses, they may not know how to report them or may fear being disciplined. According to our survey, in 2010, an estimated 116,000 of about 153,000 manufacturers in the United States (75 percent) had safety incentive programs or other workplace safety policies that, according to several experts, may affect workers’ reporting of injuries and illnesses. However, we estimated that safety incentive programs were less prevalent than other workplace safety policies, such as demerit systems, that discipline workers for unsafe work practices. We also estimated that a quarter of manufacturers had some type of safety incentive program and that most had a demerit system or post-incident drug and alcohol testing policy. Demerit systems were the most common policy reported, followed by post-incident drug and alcohol testing policies (see fig. 2). Very few manufacturers had only one type of safety incentive program, and few had only one type of other workplace safety policy. Most manufacturers had more than one safety incentive program or other workplace safety policy, and more than 20 percent had several, according to our estimates. For example, one manufacturer who participated in our survey had a program that rewarded workers with a luncheon for having no injuries that resulted in lost time on the job, and provided a separate reward to the worker who submitted the best safety suggestion during the month. 
Manufacturers with multiple types of programs or policies were more than twice as likely to have a demerit system or conduct post-incident drug and alcohol testing than they were to have a rate-based or behavior-based program (see fig. 3). Large manufacturers were more likely to have safety incentive programs and demerit systems than smaller manufacturers. We estimated that large manufacturers were more than three times as likely to have safety incentive programs compared with small manufacturers. Although safety incentive programs and other workplace safety policies were less common among small manufacturers, most small manufacturers had demerit systems and many had post-incident drug and alcohol testing policies (see fig. 4). Companies sometimes request information on manufacturers’ injury and illness rates before signing a contract with them to manufacture goods. According to some workplace safety experts, such contractors may feel pressure to lower injury and illness rates to avoid the risk of losing bids for contracted work. Manufacturers whose injury and illness rates were requested by potential contracting companies were more than twice as likely to have rate-based safety incentive programs than manufacturers whose rates were not requested. We estimated that 31 percent of U.S. manufacturers performed contractual work in 2010. Contracting companies requested injury and illness rate data from nearly a third of these manufacturers prior to signing a contract with them. Thirty-eight percent of the manufacturers that had their injury and illness rates requested reported having rate-based programs in 2010. In contrast, 13 percent of the manufacturers that did not have their injury and illness data requested by potential contracting companies prior to signing a contract reported having rate-based programs in 2010. U.S. manufacturers provided incentives to workers for a variety of safety goals and behaviors. 
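The "more than twice as likely" comparison above follows directly from the two survey estimates, 38 percent of manufacturers whose rates were requested versus 13 percent of those whose rates were not:

```python
# Prevalence ratio implied by the survey estimates: 38 percent of
# manufacturers whose injury/illness rates were requested by contracting
# companies had rate-based programs, versus 13 percent of those whose
# rates were not requested.
with_request = 0.38
without_request = 0.13

ratio = with_request / without_request
print(round(ratio, 1))  # -> 2.9, i.e., more than twice as likely
```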
Nearly three-quarters of manufacturers with rate-based programs, according to our estimates, rewarded workers for having no reported injuries and illnesses. Forty percent rewarded workers for having a low number or rate of injuries and illnesses during a specific time period, and 23 percent of them rewarded workers for reducing the number or rate of reported injuries and illnesses. Nearly 70 percent of manufacturers with behavior-based programs rewarded workers for recommending workplace safety improvements and 37 percent rewarded them for wearing protective gear. The criteria for providing rewards differed between rate-based and behavior-based programs, but the types of rewards manufacturers provided and the types of workers targeted by both of these safety incentive programs were similar. For both types of programs, monetary awards, meals, and other non-monetary awards, such as gift cards, were more commonly offered than time off work or a token of recognition, such as a plaque. Manufacturers used safety incentive programs to target various levels of workers and worker groups, including entire workplaces, work teams such as departments or shifts, supervisors, and frontline workers. However, the percentage of manufacturers that rewarded individual frontline workers through either rate-based or behavior-based safety incentive programs was twice as high as those that rewarded supervisors. OSHA can use its enforcement authority to address certain aspects of safety incentive programs and other workplace safety policies, but the effectiveness of these activities is limited. Although the OSH Act does not mandate that OSHA regulate safety incentive programs, OSHA officials told us the agency could potentially issue a regulation to address safety incentive programs and other workplace safety policies. 
However, OSHA has not done so because, according to OSHA officials, it has focused its regulatory resources on other priorities such as projects that address exposure to serious safety and health hazards. Some of OSHA’s enforcement tools can be used to address certain aspects of safety incentive programs and other workplace safety policies, but these tools are not designed to systematically address these programs. For example, a worker may file a whistleblower protection complaint if the worker reports an injury and, under the rules of the employer’s safety incentive program, is subsequently excluded from receiving a reward, such as a bonus. However, such claims may only address the adverse action experienced by an individual worker and not address the potential overall negative impact a safety incentive program may have on the workplace. Under its recordkeeping regulations, OSHA can address recordkeeping violations that occur as a result of safety incentive programs and other workplace safety policies, but it cannot address potential disincentives to injury and illness reporting associated with the policies. For example, OSHA can cite employers for failing to properly record injuries or illnesses under its recordkeeping regulations, but the relationship between a safety incentive program and potential underreporting of injuries and illnesses is not directly addressed in these requirements. To find evidence of underreporting, inspectors must interview workers, review their medical records, and compare these records to employers’ injury and illness logs to determine whether an injury or illness occurred but was not reflected on the log (29 U.S.C. §§ 657(c), 658(a); for OSHA’s recordkeeping regulations, see generally 29 C.F.R. part 1904). To address such underreporting, OSHA launched a recordkeeping enforcement initiative to audit the accuracy of employers’ injury and illness logs and identify and correct any mistakes or omissions.
OSHA began this program in September 2009, and in February 2010 established a goal of auditing injury and illness records at approximately 350 worksites nationwide over a 2-year period. Inspectors compared employers’ injury and illness logs to workers’ medical records, and interviewed workers, managers, recordkeepers, and first-aid providers. As part of these audits, OSHA directed inspectors to consider the effect of safety incentive programs or other workplace safety policies on injury and illness reporting and, when recordkeeping violations were found, to take that effect into account in assessing the severity of the violation. For example, according to OSHA officials, if inspectors found underreporting of injuries and illnesses and concluded that a safety incentive program was a contributing factor, the inspector could classify the violation as willful, which carries an increased penalty. However, the guidance provided to inspectors did not specify how this assessment should be done, and in our interviews with OSHA area office officials we found that OSHA inspectors inconsistently considered safety incentive programs when reviewing employers’ injury and illness records. For example, one area office official said that the penalty assessment for a recordkeeping violation would be the same regardless of the existence of a safety incentive program. In addition, because OSHA did not select a nationally representative sample of worksites for these inspections, OSHA cannot use the results to determine the effect of safety incentive programs and other workplace safety policies on injury and illness reporting nationwide. OSHA has developed policy guidance on safety incentive programs for the VPP, but the guidance for its other cooperative programs and for its enforcement efforts does not address safety incentive programs or other workplace safety policies.
For example, OSHA’s guidance on its SHARP program, a voluntary cooperative program that focuses on smaller employers, does not address safety incentive programs. Similarly, OSHA’s field operations manual does not provide guidance to its inspectors for addressing safety incentive programs during inspections. In June 2011, OSHA issued a policy memorandum for the VPP that contains specific criteria for safety incentive programs, including the types of programs that are encouraged for VPP sites and those that are prohibited. Programs that promote accurate injury and illness reporting are encouraged, while participants in the VPP are now prohibited from having safety incentive programs that focus on the number of injuries and illnesses, such as rate-based programs that reward workers for achieving low injury and illness rates. This policy memorandum does not address other workplace safety policies that might impact injury and illness reporting. OSHA officials are required to ensure current VPP participants are in compliance with this policy when participants are reevaluated to determine whether they will be allowed to continue to participate in the program, but the new policy is not included in the VPP manual. Officials from one regional office estimated that almost 20 percent of the region’s VPP participants have safety incentive programs that are not in compliance with this new policy. In addition to providing guidance on its voluntary cooperative programs, OSHA often provides safety information to employers during its on-site inspections. In its guidance on conducting inspections, OSHA’s field operations manual outlines the educational duties that inspectors have as part of the inspection process. For example, inspectors are expected to discuss the strengths and weaknesses of the employers’ safety and health management system and advise the employer of the benefits of effective systems during the closing conference of the inspection.
However, the field operations manual does not make any references to safety incentive programs or other workplace safety policies. Other OSHA resources lack guidance about safety incentive programs and other workplace safety policies. Outreach specialists and materials available on OSHA’s website are additional sources of information that can educate employers and workers about how safety incentive programs and other workplace safety policies may affect a workplace’s safety and health management system. Although outreach specialists each develop materials and approaches for addressing the needs of employers in their particular geographic area, each has an opportunity to discuss the potential risks and benefits of safety incentive programs and the potential impact of workplace safety policies on injury and illness reporting during discussions about recordkeeping, safety and health management systems, and OSHA’s cooperative programs, among other topics. In addition, many resources are available to employers through OSHA’s website, including fact sheets about recordkeeping and best practices, such as the Effective Workplace Safety and Health Management Systems fact sheet. This fact sheet and several others do not discuss safety incentive programs or other workplace safety policies, although some do address aspects of a positive safety culture. Safety incentive programs exist in the context of a workplace’s safety culture. Some types of programs, particularly those that are tied to low injury and illness rates, may discourage injury or illness reporting. However, the same programs in workplaces with positive safety cultures may have no effect with regard to reporting. Similarly, some workplace safety policies, such as those that punish workers in some way for reporting injuries or illnesses, may discourage workers from reporting injuries and illnesses, especially when implemented in a workplace with a negative safety culture. 
Because OSHA relies heavily on accurate injury and illness reporting in tailoring its programs and allocating its finite enforcement resources, it is important for the agency to assess the impact of safety incentive programs and certain workplace safety policies on injury and illness reporting, particularly given their prevalence. Without accurate data, employers engaged in hazardous activities can avoid inspections and may be allowed to participate in voluntary programs that reward employers with exemplary safety and health management systems by exempting them from routine inspections. OSHA can encourage employers to create positive safety cultures and avoid safety incentive programs and workplace safety policies that may have a negative effect on injury and illness reporting. However, because safety incentive programs and certain workplace safety policies are not addressed in OSHA guidance, including its field operations manual, OSHA inspectors may not consider these programs and policies during worksite inspections, even as they observe key aspects of the workplace’s safety culture. As a result, inspectors may miss opportunities to educate employers about the benefits of promoting a positive safety culture and avoiding prevalent programs and policies that can discourage accurate reporting of injuries and illnesses. In addition, in the absence of consistent guidance on the potential benefits and risks of some safety incentive programs and workplace safety policies, OSHA may recognize some employers as having exemplary safety and health management systems without considering the potentially negative effects of some of their programs and policies. To increase consistency across OSHA’s cooperative programs, the Secretary of Labor should direct the Assistant Secretary of OSHA to implement criteria on safety incentive programs and other workplace safety policies across all of its cooperative programs such as VPP and SHARP. 
The criteria should be consistent with the most recent VPP guidance memorandum that prohibits employers with safety incentive programs that focus on injury and illness rates from participating in the program. To help OSHA inspectors consistently educate employers about the importance of safety culture, the Secretary of Labor should direct the Assistant Secretary of OSHA to add language about key elements of a positive safety culture—and the potential effect of different types of safety incentive programs and other workplace safety policies—to its field operations manual. We provided a draft of this report to Labor for review and comment. Labor’s Assistant Secretary for OSHA provided written comments, which are reproduced in appendix IV. OSHA agreed with our recommendations and emphasized the agency’s concern about workplace programs that appear to encourage safe work practices but actually discourage workers from reporting injuries. OSHA also provided technical comments, which we incorporated as appropriate. In response to our recommendation that OSHA implement criteria on safety incentive programs and other workplace safety policies across all of its cooperative programs such as VPP and SHARP, OSHA stated that it will provide policy guidance about safety incentive programs across the agency’s cooperative programs. According to OSHA, this guidance will be similar to the VPP policy prohibiting participants from using safety incentive programs that have the potential to discourage workers from reporting injuries. Establishing such criteria across all of its cooperative programs will help OSHA accurately recognize employers with exemplary safety and health management systems. 
In response to our recommendation that OSHA add language about key elements of a positive safety culture—and the potential effect of different types of safety incentive programs and other workplace safety policies—to its field operations manual, OSHA stated that it has issued guidance for its inspectors about safety incentive programs that underscores the agency’s position that programs that discourage workers from reporting injuries may violate whistleblower protection statutes and OSHA’s recordkeeping regulations. OSHA issued this guidance to regional and whistleblower program officials in March 2012 and published it on the agency’s website, but it has not yet been incorporated into the agency’s field operations manual. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Labor, relevant congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7215 or moranr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To determine what is known about the effect of workplace safety incentive programs and other workplace safety policies on injury and illness reporting, we conducted a literature search for relevant studies. We sought studies that analyzed the effect of workplace safety incentive programs; other workplace safety policies, such as post-incident drug testing; or safety culture on workers’ use of safe practices; injury and illness rates; or reporting of injuries and illnesses.
To identify the studies, we searched bibliographic databases covering scientific, safety, medical, and economic literature, including ArticleFirst, CINAHL, EconLit, Electronic Collections Online, EMBASE, MEDLINE, ProQuest, PsycINFO, SciSearch, and Social SciSearch, using relevant search terms and reviewing citations of studies. We limited the searches to materials published in 2001 or after. We performed these searches from August 2011 to October 2011, and identified over 600 abstracts of studies. Among these studies, we excluded those that did not satisfy our criteria that each study (1) be published in a peer-reviewed journal and (2) contain relevant, primary research conducted in the United States. We also excluded studies that were duplicative or did not meet GAO’s methodological standards. To assess the methodological quality of the studies, two GAO research methodologists independently reviewed each study that satisfied our criteria and excluded those that did not contain original research or lacked rigor. Using this approach, we identified 26 methodologically sound studies (see app. II for a list of the 26 studies). To supplement our understanding of what is known about the effect of safety incentive programs and other workplace safety policies on injury and illness reporting, we interviewed experts and industry officials from academia, employer associations, a law firm, a consulting firm, unions, and state and federal occupational safety and health agencies.
We spoke with individuals from the University of Connecticut, Boston University, Institute for Work and Health, United Steel Workers, United Mine Workers of America, American Federation of Labor and Congress of Industrial Organizations, National Association of Manufacturers, Mercer, Voluntary Protection Programs Participants’ Association, Gibson, Dunn & Crutcher, Occupational Safety and Health Administration (OSHA), Bureau of Labor Statistics (BLS), National Institute for Occupational Safety and Health, Chemical Safety Board, and state occupational safety and health agencies in California, North Carolina, South Carolina, and Vermont. To identify these experts and industry officials, we reviewed relevant trade press and congressional transcripts and sought referrals from interviewees. To ensure balance, we spoke with an array of experts and industry officials with varying backgrounds and perspectives. To study the prevalence of workplace safety incentive programs as well as other workplace safety policies that may affect injury and illness reporting, we surveyed a nationally representative sample of manufacturing worksites. We selected a systematic random sample of 1,000 manufacturers from the 26,552 manufacturers in our sample frame. Our sample frame consisted of the set of manufacturers with 11 or more employees contained in a nationally representative BLS establishment survey fielded in 2010. This list was a relatively complete, current source of business names and addresses that had undergone a strict refinement process to remove establishments that were out of business, duplicates, or miscoded. We sorted the manufacturers by the sample weight for the BLS survey prior to the systematic random selection in order to ensure that a range of manufacturers was obtained.
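The selection approach described above (sorting the frame and then picking units at a fixed interval from a random start) can be sketched as follows. The frame size and sample size come from the report; the function name and the stand-in records are illustrative and are not GAO's actual procedure.

```python
import random

def systematic_sample(frame, n):
    """Draw a systematic random sample of n units from an ordered frame.

    The frame is assumed to be sorted already (GAO sorted by BLS sample
    weight) so the fixed-interval selection spreads across its range.
    """
    interval = len(frame) / n            # sampling interval, e.g., 26,552 / 1,000
    start = random.uniform(0, interval)  # random start within the first interval
    return [frame[int(start + i * interval)] for i in range(n)]

# Stand-in records; the actual frame was the refined BLS establishment list.
frame = ["manufacturer_%d" % i for i in range(26552)]
sample = systematic_sample(frame, 1000)
```

Because the interval (about 26.6) exceeds 1, the selected indices are distinct, and the single random start is the only source of randomness in which units are drawn.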
We designed and implemented a dual mode survey (mail and web-based) to obtain information from manufacturers on the types and characteristics of safety incentive programs and policies used at their workplaces, and the extent to which they performed contractual work for other companies. To develop our survey questions, we drew on information we gathered from interviews with occupational safety and health stakeholders and from scholarly studies on occupational safety and health. We pretested the survey with nine manufacturers that represented the three size populations of manufacturers studied (small, medium, and large) and submitted the questionnaire for an additional independent review by two survey specialists within GAO and experts in OSHA and BLS. We then made revisions based on their feedback prior to finalizing the survey. We conducted the survey using a self-administered questionnaire, and offered prospective respondents the option of completing and mailing a hard copy questionnaire or completing the questionnaire online (see app. III for a copy of the survey). To encourage participation, we mailed a reminder postcard and a second questionnaire, and made follow-up phone calls at regular intervals to all those who had not yet responded prior to closing the survey. A total of 663 manufacturers responded, resulting in a final weighted response rate of 62.4 percent. Because we surveyed a sample of manufacturers, the survey results are weighted estimates for a population of manufacturers and thus are subject to sampling errors associated with samples of this size and type. Our sample is only one of a large number of samples we might have drawn. As each sample could have provided different estimates, we expressed our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 10 percentage points).
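As a rough illustration of how a weighted proportion and its 95 percent confidence interval can be computed from survey responses, consider the sketch below. The respondent count (663) matches the survey, but the uniform weights, the indicator values, and the Kish effective-sample-size approximation are simplifying assumptions; GAO's actual estimator would also reflect the systematic sample design.

```python
import math

def weighted_proportion_ci(values, weights, z=1.96):
    """Weighted proportion and an approximate 95 percent confidence interval.

    values: 1/0 indicators (e.g., has a rate-based program).
    weights: survey weights. The variance uses the Kish effective sample
    size, a common simplification for unequal weighting.
    """
    total_w = sum(weights)
    p = sum(v * w for v, w in zip(values, weights)) / total_w
    n_eff = total_w ** 2 / sum(w ** 2 for w in weights)  # Kish approximation
    half_width = z * math.sqrt(p * (1 - p) / n_eff)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

# Toy data: 663 respondents, 146 reporting a given policy, uniform weights.
values = [1] * 146 + [0] * 517
weights = [40.0] * 663
p, (lo, hi) = weighted_proportion_ci(values, weights)
```

With realistic unequal weights, the effective sample size falls below 663 and the interval widens, which is why GAO cites interval widths such as plus or minus 10 percentage points rather than the simple-random-sample width.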
We excluded 29 of the sampled manufacturers because we were able to determine that they were out of business at the time of our survey or they indicated that they did not engage in manufacturing; all 29 were therefore considered out of scope. All estimates produced from the sample and presented in this report are representative of the in-scope population. The practical difficulties of conducting any survey may introduce errors resulting from the data collection procedures, commonly referred to as nonsampling errors, which can introduce unwanted variability into the survey results. There are four primary sources of nonsampling error:
1. Measurement—error in responses recorded on the survey instruments resulting from poorly worded, biased, or sensitive questions; ambiguous instructions; or lack of information available to respondents.
2. Nonresponse—bias from failing to get responses from establishments whose answers would have differed significantly from those that did participate.
3. Coverage—bias from failing to include all eligible establishments or from including ineligible establishments in the list from which we sampled.
4. Data processing—error arising from faulty handling or processing of the data.
We took extensive steps in developing the questionnaire, collecting the data, and analyzing the results to address the potential sources of nonsampling error. To minimize measurement error, GAO staff with subject-matter expertise collaborated with a survey design specialist to develop the questionnaire. We pretested the instrument using cognitive interviewing techniques and interviewed the pretest respondents to ensure that (1) the questions and instructions were clear, unambiguous, and in the correct order; (2) the terms we used were precise; (3) the survey did not place an undue burden on the respondents completing it; and (4) the survey was unbiased.
To assess the risk of nonresponse bias, we obtained answers over the phone to three survey questions from 19 nonrespondents. We statistically compared the answers from the nonrespondents with those of our respondents on these three questions and found no statistically significant differences. Our sample frame minimized the risk of coverage error by drawing on a nationally representative list of manufacturers that was thoroughly reviewed and cleaned to remove ineligible establishments. Finally, we took several steps to reduce processing errors:
(1) Quality control measures were implemented during preparation and mailout of survey packages to ensure that the respondents would receive the package with the proper login identification number and that the packages contained the correct contents.
(2) We contracted with an outside company to enter the data from the paper questionnaires into a database, and we checked a 10 percent sample of the database as a quality control measure.
(3) Respondents who completed questionnaires online entered their answers directly, which eliminated the errors associated with a manual data entry process.
(4) After we analyzed the data, a second independent data analyst checked all of the computer programs for accuracy.
To examine OSHA’s efforts to address safety incentive programs, we reviewed relevant federal laws and regulations, reviewed OSHA’s policies and procedures, and interviewed OSHA officials regarding the agency’s activities. We interviewed selected OSHA officials from the agency’s national office as well as several regional and area offices to learn about (1) their efforts to address the potential impact of safety incentive programs and workplace policies on injury and illness reporting, (2) the recordkeeping enforcement initiative, and (3) their views on safety incentive programs and the potential relationship between these programs and injury and illness reporting.
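The nonresponse-bias check described above, comparing answers from respondents with those of the 19 follow-up nonrespondents for statistically significant differences, can be illustrated with a two-sample test of proportions. The figures below are invented for the sketch, and GAO does not state which test it used; with only 19 nonrespondents, an exact test may be preferable in practice.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Pooled two-sample z-statistic for a difference in proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Toy figures: 166 of 663 respondents vs. 4 of 19 nonrespondents answer "yes".
z = two_proportion_z(166, 663, 4, 19)
significant = abs(z) > 1.96  # two-sided test at the 5 percent level
```

A |z| below 1.96 fails to reject the hypothesis of no difference, which is the kind of result GAO reports for its three comparison questions.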
We interviewed officials from three regional offices and five area offices, representing 5 of OSHA’s 10 regions. In all of these interviews, we attempted to meet with regional and area office officials with experience in the recordkeeping enforcement initiative and with those who oversee cooperative programs and other outreach and training efforts. We visited five OSHA offices and spoke with officials from four state occupational safety and health agencies. We selected these offices based on their geographic dispersal and representation of OSHA regions. To assess the results of OSHA’s recordkeeping enforcement initiative, we analyzed data from the OSHA Recordkeeping Inspection Assistant database, which contains records of the inspections OSHA conducted in 2009, 2010, and 2011. Prior to our analysis, we assessed the reliability of the OSHA Recordkeeping Inspection Assistant database by reviewing information obtained from OSHA about the database and interviewing a knowledgeable agency official. Where there were discrepancies in the data, we worked with this official to clarify and, in some cases, correct the data. For example, we found two records that were missing key identifying information about the OSHA region in which the inspections occurred. On the basis of our assessment, we concluded that the updated data were sufficiently reliable for our reporting purposes. We conducted this performance audit from September 2010 to April 2012 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Alavosius, Mark, Jim Getting, Joseph Dagen, William Newsome, and Bill Hopkins.
“Use of a Cooperative to Interlock Contingencies and Balance the Commonwealth.” Journal of Organizational Behavior Management, vol. 29, no. 2 (2009): 193-211.
Brown, Jean Geiger, Alison Trinkoff, Kenneth Rempher, Kathleen McPhaul, Barbara Brady, Jane Lipscomb, and Charles Muntaner. “Nurses’ Inclination to Report Work-Related Injuries: Organizational, Work-Group, and Individual Factors Associated with Reporting.” AAOHN Journal, vol. 53, no. 5 (2005): 213-217.
Chowdhury, Sanjib K., and Megan Lee Endres. “The Impact of Client Variability on Nurses’ Occupational Strain and Injury: Cross-Level Moderation by Safety Climate.” Academy of Management Journal, vol. 53, no. 1 (2010): 182-198.
DeJoy, David M., Lindsay J. Della, Robert J. Vandenberg, and Mark G. Wilson. “Making Work Safer: Testing a Model of Social Exchange and Safety Management.” Journal of Safety Research, vol. 41, no. 2 (2010): 163-171.
Evans, Demetrice D., Judd H. Michael, Janice K. Wiedenbeck, and Charles D. Ray. “Relationships Between Organizational Climates and Safety-Related Events at Four Wood Manufacturers.” Forest Products Journal, vol. 55, no. 6 (2005): 23-28.
Fugas, Carla S., José L. Meliá, and Silvia A. Silva. “The ‘Is’ and the ‘Ought’: How Do Perceived Social Norms Influence Safety Behaviors at Work?” Journal of Occupational Health Psychology, vol. 16, no. 1 (2011): 67-79.
Gangwar, Manish, and Paul M. Goodrum. “The Effect of Time on Safety Incentive Programs in the US Construction Industry.” Construction Management and Economics, vol. 23, no. 8 (2005): 851-859.
Hinze, Jimmie. “Safety Incentives: Do They Reduce Injuries?” Practice Periodical on Structural Design and Construction, vol. 7, no. 2 (2002): 81-84.
Hofmann, David A., and Barbara Mark. “An Investigation of the Relationship Between Safety Climate and Medication Errors As Well As Other Nurse and Patient Outcomes.” Personnel Psychology, vol. 59, no. 4 (2006): 847-869.
Hoonakker, Peter, Todd Loushine, Pascale Carayon, James Kallman, Andrew Kapp, and Michael J. Smith. “The Effect of Safety Initiatives on Safety Performance: A Longitudinal Study.” Applied Ergonomics, vol. 36, no. 4 (2005): 461-469.
Huang, Yueng-Hsiang, Michael Ho, Gordon S. Smith, and Peter Y. Chen. “Safety Climate and Self-Reported Injury: Assessing the Mediating Role of Employee Safety Control.” Accident Analysis and Prevention, vol. 38, no. 3 (2006): 425-433.
Lauver, Kristy J., Chris Quinn Trank, and Huy Le. “Information by Design: How Employee Perceptions of Organizational Design Relate to Injury Reporting.” Journal of Leadership and Organizational Studies, vol. 18, no. 3 (2011): 344-352.
Lauver, Kristy J., and Scott W. Lester. “Get Safety Problems to the Surface: Using Human Resource Practices to Improve Injury Reporting.” Journal of Leadership and Organizational Studies, vol. 14, no. 2 (2007): 168-179.
Leiss, Jack K. “Management Practices and Risk of Occupational Blood Exposure in U.S. Paramedics: Needlesticks.” American Journal of Industrial Medicine, vol. 53, no. 9 (2010): 866-874.
Ludwig, Timothy D., Jay Biggs, Sandra Wagner, and E. Scott Geller. “Using Public Feedback and Competitive Rewards to Increase the Safe Driving of Pizza Deliverers.” Journal of Organizational Behavior Management, vol. 21, no. 4 (2001): 75-104.
Mark, Barbara A., Linda C. Hughes, Michael Belyea, Yunkyung Chang, David Hofmann, Cheryl B. Jones, and Cynthia T. Bacon. “Does Safety Climate Moderate the Influence of Staffing Adequacy and Work Conditions on Nurse Injuries?” Journal of Safety Research, vol. 38, no. 4 (2007): 431-446.
Michael, Judd H., Zhen George Guo, Janice K. Wiedenbeck, and Charles D. Ray. “Production Supervisor Impacts on Subordinates’ Safety Outcomes: An Investigation of Leader-Member Exchange and Safety Communication.” Journal of Safety Research, vol. 37, no. 5 (2006): 469-477.
Morantz, Alison D., and Alexandre Mas. “Does Post-Accident Drug Testing Reduce Injuries?
Evidence from a Large Retail Chain.” American Law and Economics Review, vol. 10, no. 2 (2008): 246-302.
Morrow, Stephanie L., Alyssa K. McGonagle, Megan L. Dove-Steinkamp, Curtis T. Walker Jr., Matthew Marmet, and Janet L. Barnes-Farrell. “Relationships Between Psychological Safety Climate Facets and Safety Behavior in the Rail Industry: A Dominance Analysis.” Accident Analysis and Prevention, vol. 42, no. 5 (2010): 1460-1467.
Probst, Tahira M., and Armando X. Estrada. “Accident Under-Reporting Among Employees: Testing the Moderating Influence of Psychological Safety Climate and Supervisor Enforcement of Safety Practices.” Accident Analysis and Prevention, vol. 42, no. 5 (2010): 1438-1444.
Probst, Tahira M., Ty L. Brubaker, and Anthony Barsotti. “Organizational Injury Rate Underreporting: The Moderating Effect of Organizational Safety Climate.” Journal of Applied Psychology, vol. 93, no. 5 (2008): 1147-1154.
Seo, Dong-Chul. “An Explicative Model of Unsafe Work Behavior.” Safety Science, vol. 43, no. 3 (2005): 187-211.
Smith, Gordon S., Yueng-Hsiang Huang, Michael Ho, and Peter Y. Chen. “The Relationship Between Safety Climate and Injury Rates Across Industries: The Need to Adjust for Injury Hazards.” Accident Analysis and Prevention, vol. 38, no. 3 (2006): 556-562.
Vredenburgh, Alison G. “Organizational Safety: Which Management Practices are Most Effective in Reducing Employee Injury Rates?” Journal of Safety Research, vol. 33, no. 2 (2002): 259-276.
Wallace, Craig, and Gilad Chen. “A Multilevel Integration of Personality, Climate, Self-Regulation, and Performance.” Personnel Psychology, vol. 59, no. 3 (2006): 529-557.
Zohar, Dov, and Gil Luria. “The Use of Supervisory Practices as Leverage to Improve Safety Behavior: A Cross-Level Intervention Model.” Journal of Safety Research, vol. 34, no. 5 (2003): 567-577.
In addition to the contact named above, Gretta L.
Goodwin, Assistant Director, and Joel Green, Analyst-in-Charge, managed all aspects of this assignment; James E. Lloyd III and Michelle Loutoo Wilson made significant contributions to all phases of the work; Grace Cho made substantial contributions to data analysis and message and report development; Carl Barden and Pamela Davidson provided assistance in designing the study, conducting data analysis, and developing the report; Lorraine Ettaro, Stuart Kaufman, and Carl Ramirez helped with survey administration; Delores Hemsley assisted in data collection; Catherine Hurley assisted in data analysis; Ashley McCall provided literature search assistance; Barbara Chapman, Cynthia Saunders, and Elizabeth Wood assisted in the methodological review of studies; Susannah Compton assisted in message and report development; James Bennett created the report’s graphics; Sarah Cornetto provided legal advice; and Amber Yancey-Carroll and Anna Bonelli reviewed the report to check the facts presented.

OSHA relies on employer injury and illness records to target its enforcement efforts. Questions have been raised as to whether some safety incentive programs and other workplace safety policies may discourage workers' reporting of injuries and illnesses. GAO examined (1) what is known about the effect of workplace safety incentive programs and other workplace safety policies on injury and illness reporting, (2) the prevalence of safety incentive programs as well as other policies that may affect reporting, and (3) actions OSHA has taken to address how safety incentive programs and other policies may affect injury and illness reporting. GAO reviewed academic literature, federal laws, regulations, and OSHA guidance; surveyed a nationally representative sample of manufacturing worksites; and interviewed federal and state occupational safety and health officials, union and employer representatives, and researchers.
Little research exists on the effect of workplace safety incentive programs and other workplace safety policies on workers' reporting of injuries and illnesses, but several experts identified a link between certain types of programs and policies and reporting. Researchers distinguish between rate-based safety incentive programs, which reward workers for achieving low rates of reported injuries or illnesses, and behavior-based programs, which reward workers for certain behaviors, such as recommending safety improvements. Of the six studies GAO identified that assessed the effect of safety incentive programs, two analyzed the potential effect on workers' reporting of injuries or illnesses, but they concluded that there was no relationship between the programs and injury and illness reporting. Experts and industry officials, however, suggest that rate-based programs may discourage reporting of injuries and illnesses. Experts and industry officials also reported that certain workplace policies, such as post-incident drug and alcohol testing, may discourage workers from reporting injuries and illnesses. Researchers and workplace safety experts also noted that how safety is managed in the workplace, including employer practices such as fostering open communication about safety issues, may encourage reporting of injuries and illnesses. Based on its survey, GAO estimated that in 2010, 25 percent of U.S. manufacturers had safety incentive programs, and most had other workplace safety policies that, according to experts and industry officials, may affect injury and illness reporting. GAO estimated that 22 percent of manufacturers had rate-based safety incentive programs, and 14 percent had behavior-based programs. Almost 70 percent of manufacturers also had demerit systems, which discipline workers for unsafe behaviors, and 56 percent had post-incident drug and alcohol testing policies, according to GAO's estimates.
Most manufacturers had more than one safety incentive program or other workplace safety policy, and more than 20 percent had several. Such programs and policies were more common among larger manufacturers. Although the Occupational Safety and Health Administration (OSHA) is not required to regulate safety incentive programs, it has taken limited action to address the potential effect of such programs and other workplace safety policies on injury and illness reporting. These programs and policies, however, are not addressed in key guidance such as OSHA's field operations manual for inspectors. OSHA has cooperative programs that exempt employers with exemplary safety and health management systems from routine inspections. One such program prohibits participants from having rate-based safety incentive programs, but guidance on OSHA's other cooperative programs does not address safety incentive programs. Similarly, OSHA inspectors and outreach specialists provide information to employers about the potential benefits and risks of safety incentive programs, but the guidance provided to inspectors in its field operations manual does not address these programs. GAO recommends that OSHA provide guidance about safety incentive programs and other workplace safety policies consistently across the agency's cooperative programs, and add language about safety incentive programs and other workplace safety policies to the guidance provided to inspectors in its field operations manual. OSHA agreed with the recommendations, and noted its plans to address them.
Of the Education programs funded in the Recovery Act, the newly created SFSF program was the largest in terms of funding. It included approximately $48.6 billion awarded to states by formula and up to $5 billion awarded as competitive grants. SFSF was created, in part, to help state and local governments stabilize their budgets by minimizing budgetary cuts in education and other essential government services, such as public safety. SFSF funds for education distributed under the Recovery Act were required to first be used to alleviate shortfalls in state support for education to LEAs and public institutions of higher education (IHE). States were required to use SFSF education stabilization funds to restore state support to LEAs and public IHEs to the greater of fiscal year 2008 or 2009 levels. When distributing these funds to LEAs, states must use their primary education funding formula, but they can determine how to allocate funds to public IHEs. In general, LEAs maintain broad discretion in how they can use education stabilization funds, but states have some ability to direct IHEs in how to use these funds. Several other programs received additional funding through the Recovery Act. For example, the Recovery Act provided $10 billion to help LEAs educate disadvantaged youth by making additional funds available beyond those regularly allocated for ESEA Title I, Part A. These additional funds are distributed through states to LEAs using existing federal funding formulas, which target funds based on such factors as high concentrations of students from families living in poverty. The Recovery Act also provided $12.2 billion in supplemental funding for programs authorized by IDEA, the major federal statute that supports the provision of early intervention and special education and related services for infants, toddlers, children, and youth with disabilities.
Part B of IDEA funds programs that ensure preschool and school-aged children with disabilities have access to a free appropriate public education and is divided into two separate grants—Part B grants to states (for school-age children) and Part B preschool grants. While one purpose of the Recovery Act was to preserve and create jobs, it also required states to report information quarterly to increase transparency, and SFSF required recipients to make assurances relating to progress on educational reforms. To receive SFSF, states were also required to provide several assurances, including that they would maintain state support for education at least at fiscal year 2006 levels and that they would implement strategies to advance four core areas of education reform. The four core areas of education reform, as described by Education, are:
1. Increase teacher effectiveness and address inequities in the distribution of highly qualified teachers.
2. Establish a pre-K-through-college data system to track student progress and foster improvement.
3. Make progress toward rigorous college- and career-ready standards and high-quality assessments that are valid and reliable for all students, including students with limited English proficiency and/or disabilities.
4. Provide targeted, intensive support and effective interventions to turn around schools identified for corrective action or restructuring.
Education required states receiving SFSF funds to report about their collection and reporting of 34 different indicators and 3 descriptors related to these four core areas of education reform or provide plans for making information related to the education reforms publicly available no later than September 30, 2011. Previously, we reported that, while states are responsible for assuring advancement of these reform areas, LEAs were generally given broad discretion in how to spend the SFSF funds.
It is not clear how LEA progress in advancing these four reforms would affect states’ progress toward meeting their assurances. Additionally, Recovery Act recipients and subrecipients are responsible for complying with other requirements as a condition of receiving federal funds. For example, for the Recovery Act education programs we reviewed, states and LEAs must meet applicable maintenance of effort (MOE) requirements, which generally require them to maintain their previous level of spending on these programs. This also helps to ensure that states continue to fund education even with the influx of Recovery Act funds. Specifically, the newly created SFSF program required states to maintain support for elementary and secondary education, in fiscal years 2009, 2010, and 2011, at least at the level that the state provided in fiscal year 2006, but did not place any MOE requirements on subrecipients. IDEA generally prohibits states and LEAs from reducing their financial support, or MOE, for special education and related services for children with disabilities below the level of that support for the preceding year. For ESEA, Title I, states and LEAs are also required to maintain their previous level of funding with respect to the provision of free public education. As long as states met certain criteria, including that the states maintained MOE for SFSF funding, this funding could be counted to meet MOE for other programs, including ESEA, Title I, and IDEA. In addition, section 1512 of the Recovery Act requires recipients to report certain information quarterly. Specifically, the Act requires, among other types of information, that recipients report the total amount of Recovery Act funds received, associated obligations and expenditures, and a detailed list of the projects or activities for which these obligations and expenditures were made.
For each project or activity, the information must include the name and description of the project or activity, an evaluation of its completion status, and an estimate of the number of jobs funded through that project or activity. The job calculations are based on the total hours worked divided by the number of hours in a full-time schedule, expressed in FTEs—but they do not account for the total employment arising from the expenditure of Recovery Act funds. The prime recipient is responsible for reporting all data required by section 1512 of the Recovery Act each quarter for each of the grants it received under the act. According to our nationally representative survey of LEAs conducted in spring 2011, nearly all LEAs reported that they had obligated the majority of their Recovery Act funds, primarily for retaining instructional positions, which helped LEAs cope with the state and local budget shortfalls of the past few school years. As a result of the fiscal stress states faced during the recession, a number of state educational agencies (SEA) and LEAs have had difficulty meeting their MOE requirements for IDEA. States that do not either fully meet their MOE requirements or receive a waiver from Education may face a reduction in future IDEA allocations. State and LEA officials we visited stated that the actions they have taken to deal with decreased budgets and the expiration of their Recovery Act funds—such as reducing instructional supplies and equipment and cutting instructional positions—could have a negative impact on the educational services they provide to students. According to our survey, the majority of LEAs reported they had already obligated most of their SFSF; ESEA Title I, Part A; and IDEA, Part B funds. Nearly all of the LEAs—99 percent for SFSF; and 97 percent for ESEA Title I and IDEA, Part B—reported that they expected to obligate all of their Recovery Act funds prior to September 30, 2011.
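The section 1512 FTE calculation described above can be sketched in a few lines. This is an illustrative sketch only; the function name and the hours figures below are our assumptions, not values from the report.

```python
def jobs_funded_fte(total_hours_worked: float, full_time_schedule_hours: float) -> float:
    """Section 1512 job estimate: total hours worked on Recovery
    Act-funded projects divided by the hours in a full-time schedule.
    Note this counts hours funded, not the total employment arising
    from the expenditure (e.g., indirect or induced jobs).
    Names and figures are illustrative, not from the report."""
    return total_hours_worked / full_time_schedule_hours

# Hypothetical quarter: 3,120 hours charged to a grant, with a
# full-time schedule of 520 hours per quarter (40 hours x 13 weeks)
print(jobs_funded_fte(3120, 520))  # 6.0 FTEs
```

A recipient reporting under section 1512 would sum hours across all workers on the project before dividing, which is why partial positions show up as fractional FTEs.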
However, approximately one-quarter (23 percent) of LEAs reported that uncertainty about allowable uses of the funds impacted their ability to expend them in a timely and effective manner. According to data from Education, as of September 9, 2011, about 4 percent of the states’ obligated Recovery Act funds remained available for expenditure. See appendix II for percentages of awarded Recovery Act funds drawn down by states. As of September 9, 2011, two states had drawn down 100 percent of their ESEA Title I, Part A funds. Additionally, 27 states had drawn down all of their SFSF education stabilization funds, while Wyoming, for example, had the lowest drawdown rate for SFSF—34 percent. Drawdowns can lag behind actual expenditures for various reasons. For example, SEA officials in Wyoming stated that funds for certain uses, such as professional development, tended to be expended in large amounts during the middle and end of the school year, so funds were not drawn down at a constant rate throughout the school year. Additionally, SEA officials in Alaska told us their drawdown rates appeared low because the state draws down funds on a quarterly basis to reimburse LEAs after their allocations have been spent. SEA officials in the states we visited told us they provided guidance on obligating Recovery Act funds in an effort to assist LEAs in meeting the deadline for obligation of the funds. For example, SEA officials in Massachusetts told us that they sent four communiqués and conducted teleconferences with LEAs with the goal of ensuring that SFSF funds were spent appropriately and in a timely fashion. In Wyoming, SEA officials stated they requested that districts submit Periodic Expenditure Reports on a quarterly basis so that they could assess districts’ spending of Recovery Act funds.
They also told us that they contacted districts to determine if they were having challenges obligating the funds by the September 2011 deadline and sent e-mails to their districts notifying them of the amount of funds they had remaining. Retaining staff was the top use of SFSF; IDEA, Part B; and ESEA Title I, Part A Recovery Act funding cited by LEAs over the entire grant period. According to our survey, about three-quarters of LEAs spent 51 percent or more of their SFSF funds on job retention (see fig. 1). A smaller, but substantial, percentage of LEAs also reported using 51 percent or more of their ESEA Title I, Part A and IDEA, Part B Recovery Act funding—an estimated 43 percent and 38 percent, respectively—for job retention. Specifically, in the 2010-2011 school year, the large majority of LEAs (84 percent) used Recovery Act funds to retain instructional positions, which typically include classroom teachers and paraprofessionals. Salaries and benefits comprise the majority of public school budgets, and the Recovery Act provided LEAs additional funds to pay for retaining education staff. In addition to retaining instructional positions, LEAs spent Recovery Act funds on one-time, nonrecurring purchases and sustainable items that built capacity without creating recurring costs. According to our survey, 78 percent of LEAs reported using 1 to 25 percent of at least one of their Recovery Act funding sources—SFSF; ESEA Title I, Part A; or IDEA, Part B—on one-time expenditures, such as professional development for instructional staff, computer technology, and instructional materials. For example, LEA officials in one district in Mississippi told us that they used Recovery Act funds to invest in technology, security equipment, and a handicapped-accessible school bus for students with special needs.
In the New Bedford Public Schools district in Massachusetts, LEA officials stated that Recovery Act funds were used to rehabilitate and redeploy computers around the district, purchase iPad connections to enable online learning, and provide professional development to teachers on various technological efforts. See figure 2. Other one-time purchases made with Recovery Act funds enhanced districts’ capacity to provide services in the future, sometimes with anticipated long-term cost savings. In Massachusetts, we visited two LEAs—Newton Public Schools and New Bedford Public Schools—that used IDEA, Part B Recovery Act funds to provide or expand their services for students with special needs instead of paying more expensive schools or facilities to provide the alternative programs and services. LEA officials in Fairfield-Suisun Unified School District in California told us they used IDEA, Part B Recovery Act funds to implement two initiatives they expected to lead to significant cost savings in the future. The first initiative involved partnering with the nearby University of the Pacific to recruit recent speech pathology graduates. In exchange for externships and student loan stipends paid for with Recovery Act funds, the students committed to working in the district for 3 years upon graduation. These newly licensed therapists would be paid salaries of around $45,000 per year, considerably less than the contracted therapists who cost the district over $100,000 per year. Further, because of the 3-year commitment, officials stated the graduates were more likely to continue working in the district as permanent employees. Officials estimated that this initiative could save them $800,000 in the 2011-2012 school year. The second initiative used IDEA, Part B Recovery Act funds to start a public school for emotionally disturbed students who previously attended non-public schools at the district’s expense.
According to the officials, remodeling the old school building was both cost-effective and programmatically effective, since non-public schools for emotionally disturbed students could cost up to $85,000 per student, with additional costs for occupational and speech therapy if needed. The new public school costs from $25,000 to $35,000 per student, according to district officials. Additionally, officials at Hinds Community College in Mississippi used SFSF education stabilization funds to invest in energy conservation. Specifically, the college contracted with an organization to help educate students and staff on energy conservation efforts, such as turning off lights and computers. The officials stated that they saved approximately $1 million on utilities in fiscal year 2010, which offset the need to increase tuition. A large majority of LEAs reported that, with the use of Recovery Act funds, they were able to maintain or increase the level of service they provided to students compared with the year before they received the funds (see table 1). LEA officials in the Center Point-Urbana Community School District in Iowa told us that Recovery Act funds allowed the district to maintain its core curriculum, provide professional development to instructional staff, and maintain the collection of assessment data that helps them align the district’s curriculum with the state’s core curriculum. LEA officials in the Water Valley School District in Mississippi stated that SFSF funds allowed the district to maintain its reform efforts because they allowed students greater access to teachers. They explained that saving those teacher positions allowed them to keep class sizes small and offer more subjects, such as foreign language, fine arts, and business classes. However, an estimated 13 percent of LEAs were not able to maintain the same level of service even with Recovery Act SFSF funds.
These LEAs reported a number of factors that had an effect on their decreased level of service, including increases in class size, reductions in instructional and non-instructional programs, and reductions in staff development. For example, LEA officials at the Tipton Community School District in Iowa stated that, even with Recovery Act funding, they could not afford to maintain their high school agriculture program and middle school vocal music program on a full-time basis. The fiscal condition of LEAs across the country is mixed, but many school districts continued to face funding challenges in the 2010-2011 school year. One sign of state fiscal stress has been mid-year budget reductions resulting from lower revenues than those forecasted. Nationwide, in state fiscal year 2011, one of the program areas where many states made mid-year general fund expenditure reductions was K-12 education, according to the Fiscal Survey of States. Out of the 23 states that reported making mid-year reductions, 18 states reduced K-12 education funding. Looking forward to fiscal year 2012, reductions for K-12 education had been proposed in 16 states, according to the Fiscal Survey of States. Given that nearly half of education funding, on average, is provided by the states, the impact of state-level reductions to education could significantly affect LEA budgets. Over the course of our work on the Recovery Act, our surveys of LEAs have shown a mixed but deteriorating fiscal situation for the nation’s LEAs. Specifically, our survey of LEAs conducted in the 2009-2010 school year indicated that an estimated one-third of LEAs reported experiencing funding decreases in that year. Our survey conducted in the 2010-2011 school year showed that an estimated 41 percent of LEAs reported experiencing funding decreases in that year. Moreover, nearly three-quarters (72 percent) anticipated experiencing funding-level decreases in school year 2011-2012 (see fig. 3).
Further, LEAs anticipated decreases of varying amounts—24 percent expected decreases between 1 and 5 percent, 29 percent expected decreases between 6 and 10 percent, and 19 percent expected decreases over 10 percent. All types of LEAs have had to cope with declining budgets in the past few school years, but LEAs with high student poverty rates were especially hard hit. LEAs that had high student poverty rates (54 percent) more often reported experiencing funding decreases compared to those with low student poverty rates (38 percent). Additionally, 45 percent of suburban LEAs reported experiencing a decrease in funding from the 2009-2010 school year to the 2010-2011 school year. Likewise, 41 percent of rural LEAs and 33 percent of urban LEAs reported experiencing funding decreases in the same year. In addition, 62 percent of LEAs that experienced a decrease in funding in the 2010-2011 school year reported that they formed or planned to form an advisory committee or hold meetings with community stakeholders to develop budget recommendations as a cost-saving strategy. To address their funding decreases in school year 2010-2011, about one-quarter or more of LEAs reported taking actions such as reducing instructional supplies and equipment and cutting instructional positions. Moreover, about one-half of LEAs that expected a decrease in funding in the upcoming 2011-2012 school year reported that they would likely have to reduce instructional supplies and equipment or cut instructional and non-instructional positions in the 2011-2012 school year to address the budget shortfall (see fig. 4). LEAs across the country will soon exhaust their SFSF; ESEA Title I, Part A; and IDEA, Part B Recovery Act funds, which will place them at the edge of a funding cliff—meaning that they will not have these funds to help cushion budget declines in the upcoming 2011-2012 school year.
However, many LEAs planned to spend Education Jobs Fund awards, which could mitigate some of the effects of the funding cliff. Congress created the Education Jobs Fund in 2010, which generally provides $10 billion to states to save or create education jobs for the 2010-2011 school year. States distribute the funds to LEAs, which may use the funds to pay salaries and benefits, and to hire, rehire, or retain education-related employees for the 2010-2011 school year. According to our survey, an estimated 51 percent of LEAs spent or planned to spend 75 to 100 percent of their Education Jobs Fund allocation in the 2010-2011 school year and about 49 percent planned to spend the same amount in the 2011-2012 school year. The large majority of LEAs (72 percent) spent or planned to spend most of the funds on retaining jobs, as opposed to hiring new staff or rehiring former staff. State and LEA officials we visited stated that the actions they have taken to deal with decreased budgets and the expiration of their Recovery Act funds could have an impact on the educational services they provide. For example, officials at the Fairfield-Suisun Unified School District in California told us that they tried to make cuts that had the least impact on the classroom, but they had begun making cuts that would impact the students. Specifically, they reported that they would increase class sizes, cut administrative and student support staff, eliminate summer school programs, and close schools because of their decreased budget. LEA officials at the Newton Public School District in Massachusetts stated that they cut many support services and were reviewing under-enrolled classes to determine which programs to eliminate. They stated that they tried to structure cuts to mitigate the impact on student learning, but that the cuts would nonetheless negatively impact the students’ educational services.
In Hawaii, SEA officials told us that their state was considering certain cost-saving scenarios to help mitigate the state’s strained fiscal climate, including decreasing wages for all SEA employees, increasing class size, and eliminating school bus transportation for all students except those with special needs. Officials noted that eliminating bus transportation could lead to increased student absences and could be a challenge for students living in rural areas. Additionally, officials at the Center Point-Urbana School District in Iowa told us that they made several adjustments to save costs and be more efficient, such as reducing custodial staff. Because Center Point-Urbana is a small, rural district, officials told us that any further cuts would jeopardize the quality of education it can provide to students. Further, a recent Center on Education Policy report found that funding cuts also hampered progress on school reform. Based on its national survey of school districts, the Center estimated that 66 percent of school districts with budget shortfalls in 2010-2011 responded to the cuts by either slowing progress on planned reform, or postponing or stopping reform initiatives. Further, about half (54 percent) of the districts that anticipated shortfalls in 2011-2012 expected to take the same actions in the next school year. According to our survey, over a quarter of LEAs decreased their spending on special education because of the local MOE spending flexibility allowed under IDEA and the large influx of Recovery Act IDEA funds. Under IDEA, LEAs must generally not reduce the level of local expenditures for children with disabilities below the level of those expenditures for the preceding year. The law allows LEAs the flexibility to adjust local expenditures, however, in certain circumstances.
Specifically, in any fiscal year in which an LEA’s federal IDEA, Part B Grants to States allocation exceeds the amount the LEA received in the previous year, an eligible LEA may reduce local spending on students with disabilities by up to 50 percent of the amount of the increase. If an LEA elects to reduce local spending, those freed-up funds must be used for activities authorized under the ESEA. Because Recovery Act funds for IDEA count as part of the LEA’s overall federal IDEA allocation, the total increase in IDEA funding for LEAs was far larger than the increases in previous years, which allowed many LEAs the opportunity to reduce their local spending. As we have previously reported, the decision by LEAs to decrease their local spending may have implications for future spending on special education. Because LEAs are required to assure that they will maintain their previous year’s level of local, or state and local, spending on the education of children with disabilities to continue to receive IDEA funds, if an LEA lowers its spending using this flexibility, the spending level that it must meet in the following year will be at this reduced level. If LEAs that use the flexibility to decrease their local spending do not voluntarily increase their spending in future years—after Recovery Act funds have expired—the total local, or state and local, spending for the education of students with disabilities may decrease, compared to spending before the Recovery Act. Many LEAs anticipate difficulty meeting the IDEA MOE requirement for the next few years and could experience financial consequences if they do not comply. Through our survey, we found that 10 percent of LEAs expected to have trouble meeting their MOE for school year 2010-11; for the 2011-12 school year, this percentage jumped to 24 percent. For example, Florida officials reported that nearly two-thirds of the LEAs in their state may be in jeopardy of not meeting their MOE requirement.
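The local MOE flexibility described above (an eligible LEA may reduce local spending by up to 50 percent of the year-over-year increase in its federal IDEA allocation) can be illustrated with a short, hypothetical calculation; the function name and dollar amounts are ours, not from the statute or the report.

```python
def max_local_spending_reduction(current_idea_allocation: float,
                                 prior_idea_allocation: float) -> float:
    """Upper bound on how much an eligible LEA may reduce local
    special education spending under IDEA's local MOE flexibility:
    up to 50 percent of the increase in its federal IDEA, Part B
    allocation over the previous year (no reduction is available
    if the allocation did not increase). Illustrative sketch only."""
    increase = current_idea_allocation - prior_idea_allocation
    return 0.5 * max(increase, 0.0)

# Hypothetical LEA: its allocation rises from $1.0 million to
# $1.8 million once Recovery Act IDEA funds are counted, so local
# spending may be reduced by up to $400,000; the freed-up funds
# must be used for ESEA-authorized activities
print(max_local_spending_reduction(1_800_000, 1_000_000))  # 400000.0
```

Because the reduced level becomes the baseline the LEA must maintain in later years, a one-time reduction of this kind can lower local special education spending well beyond the Recovery Act period unless spending is voluntarily restored.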
Further, Education officials told us the LEA MOE amount can be difficult to calculate because there are various exceptions and adjustments LEAs can make, such as spending changes when students receiving high-cost services leave a program, the hiring of lower-salary staff to replace retirees, and extensive one-time expenditures such as a computer system. Education officials reported that they provided technical assistance to help states and LEAs understand how to include these exceptions and adjustments in their MOE calculations. Of LEAs that exercised the flexibility to adjust their IDEA MOE amount, 15 percent reported they anticipated having difficulty meeting MOE in 2010-11 even though their required local spending level was reduced. In 2011-12, 33 percent of the LEAs that took advantage of the MOE adjustment still expected difficulty in meeting their MOE level. According to Education’s guidance, if an LEA is found to have not maintained its MOE, the state is required to return to Education an amount equal to the amount by which the LEA failed to maintain effort. Additionally, IDEA does not provide LEAs an opportunity to receive a waiver from MOE requirements. In addition to LEAs, states must also meet MOE requirements. To be eligible for Part B funding, states must provide an assurance that they will not reduce the amount of state financial support for special education below the amount of that support for the preceding fiscal year, and must operate consistent with that assurance. However, Education may waive this state MOE requirement under certain circumstances. As of August 2011, seven states had applied for a total of 11 waivers of IDEA MOE requirements, and other states reported they were considering applying for a waiver because of fiscal declines in their states.
While Education has granted full waivers in five instances, it has also denied or partially granted waivers in five instances for Iowa, Kansas, Oregon, and South Carolina (twice) and is currently reviewing an additional waiver request from Kansas (see table 2). In their waiver requests, all seven states cited declining fiscal resources as the reason for not being able to maintain their spending on special education, but waiver requests varied in amount from nearly half a million dollars in West Virginia to over $75 million in South Carolina. Education’s guidance states that it considers waiver requests on a case-by-case basis and seeks to ensure that reductions in the level of state support for special education are not proportionally greater than the reduction in state revenues. In addition, as part of its review of waiver requests, Education seeks to ensure that states are not reducing spending on special education programs more severely than other areas. When Education receives a request for a waiver, officials told us they request state budget data to better understand the state’s calculation of MOE, and to assess whether granting a waiver would be appropriate. According to Education officials, as well as state officials, this process can be lengthy and may involve considerable back-and-forth between the department and the state to acquire the necessary information, understand the state’s financial situation, and determine the state’s required MOE level. Once all the data have been collected and reviewed to determine whether the state experienced exceptional or uncontrollable circumstances and whether granting a waiver would be equitable due to those circumstances, Education officials inform states that their waiver has either been approved, partially granted, or denied.
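One factor Education weighs, per its guidance, is whether the proportional cut to state special education support exceeds the proportional decline in state revenues. A minimal sketch of that comparison follows; the function name and the dollar figures are hypothetical illustrations, not Education's actual review procedure.

```python
def cut_is_proportionate(sped_prior: float, sped_current: float,
                         revenue_prior: float, revenue_current: float) -> bool:
    """Return True if the percentage reduction in state special
    education support does not exceed the percentage reduction in
    state revenues. This is one consideration in Education's
    case-by-case waiver review, sketched here with illustrative
    names; the actual review weighs additional factors."""
    sped_cut = (sped_prior - sped_current) / sped_prior
    revenue_cut = (revenue_prior - revenue_current) / revenue_prior
    return sped_cut <= revenue_cut

# Hypothetical state: revenues fell 10 percent while special
# education support was cut 8 percent, so the cut passes this check
print(cut_is_proportionate(500_000_000, 460_000_000,
                           10_000_000_000, 9_000_000_000))  # True
```

A state whose special education cut outpaced its revenue decline (say, a 20 percent cut against a 10 percent revenue drop) would fail this check, consistent with Education's concern that special education not be cut more severely than other areas.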
For states whose waivers are denied or are partially granted, according to Education officials, the state must provide the amount of the MOE shortfall for special education during the state fiscal year in question or face a reduction in its federal IDEA grant award by the amount equal to the MOE shortfall. Education officials told us that because a state must maintain financial support for special education during a fiscal year, IDEA does not permit a state to make up this shortfall after that fiscal year is over. Education officials also told us that once a state’s funding is reduced, the effect may be long-lasting in that the IDEA requires that each subsequent year’s state allocation be based, in part, on the amount the state received in the prior year. Both Kansas and South Carolina now face reductions of IDEA awards for fiscal year 2012 of approximately $2 million and $36 million, respectively. Education officials reported that it is impossible to predict with certainty the effect this may have on the states’ future IDEA awards, but they indicated that these reductions may have a long-lasting negative effect on future IDEA awards. South Carolina has filed an “Appeal of Denial of Waiver Request/Reduction in Funds” with Education’s Office of Hearings and Appeals regarding Education’s decision to partially grant its waiver request for 2009-2010. Education plans to conduct two common types of systematic program assessment: program evaluation and performance measurement. In the coming years, Education plans to produce an evaluation that will provide an in-depth examination of various Recovery Act programs’ performance in addressing educational reform. In addition to this overall assessment of the programs’ results, for the SFSF program, Education plans to measure states’ ability to collect and publicly report data on preestablished indicators and descriptors of educational reform. 
Education intends for this reporting to be a means for improving accountability to the public in the shorter term. Education plans to conduct a national evaluation to assess the results of Recovery Act-funded programs and initiatives addressing educational reform. The evaluation is intended to focus on efforts to drive innovation, improve school performance, and reduce student achievement gaps. According to Education officials, the programs covered by the evaluation include SFSF; IDEA, Part B; ESEA Title I, Part A; Race to the Top; the Teacher Incentive Fund; and Ed Tech. Including these Recovery Act education programs in one evaluation will allow for a broad view of the results of programs focused on education reform. As part of this integrated evaluation, Education plans to issue four reports over the next several years that are descriptive in nature, with the final report in 2014 including analysis of outcome data. The four planned reports are described in table 3. In addition, studies are planned related to measuring progress in meeting performance goals under the Recovery Act, according to Education officials. For example, Education’s Policy and Program Studies Service will issue a report in 2012 that will examine teacher evaluation and other teacher-related issues based on state reported data under SFSF and through Education’s EDFacts database. Although the Recovery Act does not require states and LEAs to conduct evaluations of their Recovery Act-funded reform activities, officials in a few states and LEAs we talked with said they are considering conducting evaluations. For example, Mississippi has implemented LEA program evaluations of some Recovery Act funded initiatives using student achievement data. 
At the local level, between about 43 and 56 percent of LEAs reported that they are neither collecting nor planning to collect data that would allow for the evaluation of the use of SFSF; IDEA, Part B; or ESEA Title I, Part A funds, while between about 19 and 31 percent of LEAs indicated they were either collecting or planning to collect information for this purpose. (See table 4.) For example, officials at one LEA in Massachusetts said that they are evaluating their use of IDEA Recovery Act funds to provide special education programs within the district rather than through private schools. In addition to the more comprehensive evaluation, Education intends to assess each state’s progress on collecting and publicly reporting data on all of the 37 SFSF-required indicators and descriptors of educational reform because, according to Education officials, the public will benefit from having access to that information. States have until September 30, 2011, to report these performance data. As part of that assessment, Education officials said they have reviewed states’ SFSF applications and self-reported annual progress reports on uses of funds and the results of those funds on education reform and other areas. In addition to reviewing the applications and annual reports, Education requires states that receive SFSF funds to maintain a public Web site that displays information responsive to the 37 indicator and descriptor requirements in the four reform areas. For example, on its Web site as of August 2011, Iowa’s SEA reported that it includes 9 of the 12 required reporting indicators for its statewide longitudinal data system. These Web-based, publicly available data are intended for use within each state, according to Education officials, because individual states and communities have the greatest power to hold their SEAs and LEAs accountable for reforms. 
Specifically, Education intended this information to be available to state policymakers, educators, parents, and other stakeholders to assist in their efforts to further reforms by publicly displaying the strengths and weaknesses in education systems. Officials in most of the states we talked with said that the requirements to report this information are useful. For example, some state officials pointed out that publicly reporting such data could serve as a catalyst for reform by highlighting areas where the state could improve. In addition to each state publicly reporting this information, Education plans to report states’ progress toward complying with the conditions of the SFSF grant at the national level. Education officials said they will summarize states’ ability to collect and report on the required indicators and descriptors across the four reform areas. Not all states were able to collect and report this information as of March 2011, but states have until September 30, 2011, to do so. If a state could not report the information, it was required to create a plan to do so as soon as possible and by the September deadline. As part of its reporting, Education will summarize states’ responses for certain indicators and descriptors and present them on its Web site. For many other indicators and all three descriptors, Education officials said that the department faces challenges in presenting a national perspective on states’ progress. For example, there are no uniform teacher performance ratings among LEAs within states and across states, which limits Education’s ability to present comparisons. Moreover, states are not required to present information in a consistent manner, making it difficult to present aggregated or comparative data for many of the indicators. Also, Education officials said that because information addressing the three descriptors is presented in narrative form, it is difficult to provide summary information. 
According to Education officials, they did not provide specific guidance on how states are to report the other data elements because they did not want to be too prescriptive. However, according to Education officials, through their reviews of state Web sites they found cases where the sites do not clearly provide the information, and states have acted on Education’s suggested improvements to the sites. Additionally, Education plans to use states’ progress toward collecting and reporting this information to inform whether states are qualified to participate in or receive funds under future reform-oriented grant competitions, as it did for the Race to the Top program. GAO has found that using applicants’ past performance to inform eligibility for future grant competitions can be a useful performance accountability mechanism. Education communicated its intention to use this mechanism in the final requirements published in the Federal Register on November 12, 2009, but Education has not yet specified how this mechanism will be used. As a result, officials in most of the states we spoke with said they were unaware of how Education planned to use the indicators or how Education would assess them with regard to their efforts to meet assurances. Education officials said they also plan to use the information to inform future policy and technical assistance to states and LEAs. To help ensure accountability of Recovery Act funds, a wide variety of entities oversee and audit Recovery Act programs, and Education officials told us they routinely review monitoring and audit results from many of these sources. Federal and state entities we spoke with described various accountability mechanisms in place over Recovery Act education programs, including financial reviews, program compliance reviews, and recipient report reviews. 
For example, state auditors and independent public accountants conduct single audits that include tests of internal control over and compliance with grant requirements such as allowable costs, maintenance of effort, and cash management practices. The Department of Education, the Education Office of Inspector General (OIG), and various state entities also examine internal controls and financial management practices, as well as data provided quarterly by grant recipients and subrecipients as required by section 1512 of the Recovery Act. Additionally, many of these entities conduct programmatic reviews that include monitoring compliance with program requirements, such as funding allowable activities and achieving intended program goals. These accountability efforts have helped identify areas for improvement at the state and local levels, such as issues with cash management, subrecipient monitoring, and reporting requirements. For example, since 2009 the Education OIG has recommended that several states improve their cash management procedures after finding that states did not have adequate processes to both minimize LEA cash balances and ensure that LEAs were properly remitting interest earned on federal cash advances. The Education OIG also found that several states had not developed plans to monitor certain Recovery Act funds or had not incorporated Recovery Act-specific requirements into their existing monitoring protocols. With regard to recipient reporting, various recipients had difficulty complying with enhanced reporting requirements associated with Recovery Act grants. For example, an independent public accounting firm contracted by the Mississippi Office of the State Auditor found 32 instances of noncompliance with reporting requirements in the 43 LEAs it tested. The findings included failing to file quarterly recipient reports on Recovery Act funds as required and providing data in the quarterly reports that differed from supporting documentation. 
During the fiscal year 2010 single audits of the state governments we visited, auditors identified noncompliance with certain requirements that could have a direct and material effect on major programs, including some education programs in California and Massachusetts. In Iowa and Mississippi, the auditors found that the states complied in all material respects with federal requirements applicable to each of the federal programs selected by the auditors for compliance testing. Auditors also identified material weaknesses and significant deficiencies in internal control over compliance with SFSF; ESEA Title I, Part A; and IDEA, Part B, for some SEAs and LEAs we visited. For example, auditors reported that California’s SEA continued to have a material weakness because it lacked an adequate process of determining the cash needs of its ESEA Title I subrecipients. At the state level, Iowa, Massachusetts, and Mississippi were found to have no material weaknesses in internal control over compliance related to Recovery Act education funds, though auditors did identify significant deficiencies in Iowa. For example, in Iowa auditors found several instances of excess cash balances for the SFSF grant. According to our survey of LEAs, nearly 8 percent of all LEAs reported having Single Audit findings related to Recovery Act education funds. For example, an auditor found that one LEA in Iowa had a material weakness because it did not properly segregate duties for SFSF—one employee was tasked with both preparing checks and recording transactions in the general ledger. In Massachusetts, the auditors identified a material weakness because an LEA was not complying with Davis-Bacon Act requirements, such as failing to obtain certified payrolls for vendors contracted for special education construction projects in order to verify that employees were being paid in accordance with prevailing wage rates. 
As part of the single audit process, grantees are responsible for follow-up and corrective action on all audit findings reported by their auditor, which includes preparing a corrective action plan at the completion of the audit. For all the 2010 single audit findings described above, the recipients submitted corrective action plans. Our survey of LEAs showed that federal and state entities also have been monitoring and auditing their Recovery Act funds through both site visits and desk reviews. As figure 5 indicates, over a third of LEAs reported their SEA conducted a desk review to oversee their use of Recovery Act funds, and nearly a fifth reported their SEA conducted a site visit. States are responsible for ensuring appropriate use of funds and compliance with program requirements at the subrecipient level, and Education in turn works to ensure that the states are monitoring and implementing federal funds appropriately. Education does select some school districts for desk reviews and site visits, as shown in figure 5. While few LEAs reported that Education monitored their Recovery Act funds directly, Education program offices told us that as part of their oversight efforts, they routinely review and follow up on information from a broad range of other entities’ monitoring and audit reports. SFSF; ESEA Title I, Part A; and IDEA, Part B program officials told us that information drawn from multiple sources helps to (1) inform their monitoring priorities, (2) ensure states follow up on monitoring findings, and (3) target technical assistance in some cases. Education’s approach to ensuring accountability of SFSF funds, which was designed to take into consideration the short timeframes for implementing this one-time stimulus program, as well as the need for unprecedented levels of accountability, has helped some states address issues quickly. 
Two of Education’s goals in monitoring these funds are to (1) identify potential or existing problem areas or weaknesses and (2) identify areas where additional technical assistance is warranted. SFSF officials told us that they have prioritized providing upfront technical assistance to help states resolve management issues before they publish monitoring reports. This is intended to be an iterative process of communicating with states about issues found during monitoring, helping them develop action plans to address findings, and working with them to ensure successful completion of any corrections needed. Some states we spoke with told us that Education’s approach to SFSF monitoring allowed them to resolve issues prior to Education issuing a final monitoring report to the state, and also allowed them to correct systemic or cross-programmatic issues beyond SFSF. For example, New York officials told us that after their monitoring review, Education provided a thorough explanation of the corrective actions that were required. This allowed the state the opportunity to resolve the issues, which were specific to individual subrecipients, prior to the issuance of Education’s final monitoring report. North Carolina officials said Education’s monitoring helped them to implement new cash management practices, and reported that Education staff were proactive about communicating with the state to enable the issue to be resolved. District of Columbia officials also stated that Education’s SFSF monitoring raised awareness of subgrantee cash management requirements and the need for state policies for those requirements across programs. District of Columbia, New York, and North Carolina officials all reported that the technical assistance they received as part of Education’s SFSF monitoring follow-up was timely and effective. 
While some states reported helpful and timely contact from Education after their monitoring reviews were completed, we found that communication varied during the follow-up process, which left some states waiting for information about potential issues. According to data provided by Education, most states that were monitored before June 2011 received contact at least once before the department issued a draft report with monitoring results. However, several states received no contact from Education before they received draft reports presenting areas of concern. Education officials explained that if complete documentation was available by the end of the state’s monitoring review, the situation would require less follow-up communication than if the state needed to submit additional documentation. Additionally, while the department did contact most states after monitoring reviews, it did not consistently communicate feedback to states regarding their reviews. Some states that did not receive monitoring feedback promptly, either orally or in writing, have expressed concerns about their ability to take action on potential issues. For example, an Arizona official told us in June 2011 that the state had not been contacted about the results of its monitoring visit in December 2010 and that follow-up contact from Education would have been helpful to make any necessary adjustments during the final months of the SFSF program in the state. According to Education officials, the department did communicate with Arizona on several occasions following the monitoring visit, primarily to request additional information or clarification on such things as the state’s method for calculating MOE. Education officials told us that, as a result of receiving further information and documentation from the state, they were finalizing the state’s draft report and would share the information with the state as soon as possible. 
In July 2011, California officials told us they had not heard about the results of the monitoring review that was completed 10 months earlier in September 2010. California officials told us that Education raised a question during its review, but the state was unsure about the resolution and whether it would be required to take corrective action. Education officials told us in September 2011 that they had numerous communications with California officials, often to clarify issues such as the state’s method for calculating MOE, and that they were still in communication with the state as part of the process of identifying an appropriate resolution. As a result of Education’s approach to monitoring, the length of time between the department’s monitoring reviews and the issuance of the monitoring reports varied greatly, from as few as 25 business days to as many as 265 business days (see fig. 6). The need to address issues identified during monitoring and the subsequent frequency of communication during monitoring follow-up can affect the amount of time it takes to issue reports with monitoring results. For example, after Maine’s desk review in September 2010, Education contacted the state 10 times to request additional information and clarification before sending the state a draft interim report 7 months later in April 2011. In contrast, Rhode Island was contacted once after its site visit, and Education provided a draft report with results about a month later. In part because of the need for continuous collaboration with states, Education’s written SFSF monitoring plan does not include specific internal time frames for when it will communicate the status of monitoring issues to states after desk reviews and site visits. In the absence of such time frames, the length of time between Education’s monitoring reviews and issuance of draft interim reports with monitoring feedback varied widely across states. 
Education officials told us they believe states benefit more from the iterative monitoring process that emphasizes early resolution of issues than through the issuance of written monitoring reports. Due to its SFSF monitoring approach, Education has provided limited information publicly on the results of its oversight efforts, but it has plans to provide more detailed reports on what it has found during monitoring in the future and has taken steps to share information on common issues found among the states. While most SFSF monitoring reviews have been completed for the 2010-2011 cycle, Education has not communicated information about most of these reviews to the public and the states’ governors. Of the 48 completed reviews, only three reports for site visits and 12 reports for desk reviews have been published (see fig. 7). Additionally, the reports that have been published are brief and present a general description of the area of concern without detailing what the specific issues and their severity were. For example, in Tennessee’s final letter report, Education wrote that it found issues with LEA funding applications, fiscal oversight, allowable activities, cash management, and subrecipient monitoring. However, Education officials told us that they planned to publish more detailed final reports after the 2011-2012 SFSF monitoring cycle, at which point they would have completed both a desk review and a site visit for each state. In the meantime, to help other states learn from common issues found during SFSF monitoring reviews, Education provided technical assistance to all states via a webinar in February 2011. The webinar highlighted lessons learned during monitoring reviews, including best practices for cash management and separate tracking of funds. To meet our mandate to comment on recipient reports, we continued to monitor recipient-reported data, including data on jobs funded. 
For this report, we focused our review on the quality of data reported by SFSF; ESEA Title I, Part A; and IDEA, Part B education grant recipients. Using education recipient data from the eighth reporting period, which ended June 30, 2011, we continued to check for errors or potential problems by repeating analyses and edit checks reported in previous reports. Education uses various methods to review the accuracy of recipient reported data to help ensure data quality. Specifically, Education compared data from the agency’s grant database and financial management system with recipient reported data. These systems contain internal data for every award made to states, including the award identification number, award date, award amount, outlays, and recipient names. Education program officials told us they verified expenditure data in states’ quarterly reports by comparing them with data in their internal grants management system. Education officials told us that state expenditures can vary from outlays depending on how the state reimburses its subrecipients, but Education officials review the figures to determine whether they are reasonable. In addition, SFSF officials told us they cross-walked the recipient reported data with previous quarterly reports to check for reasonableness. For example, the officials told us they compared the number of subrecipients and vendors from quarter to quarter to see if they increased or stayed the same, as would be expected for a cumulative data point. Education officials stated they worked directly with states to correct any issues found during their checks of recipient reported data. Overall, Education officials agreed that they have made significant progress in ensuring data quality, as compared to the early quarters when they had to focus on helping states understand basic reporting requirements. 
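The quarter-to-quarter cross-walk described above, in which cumulative data points such as subrecipient counts should only increase or stay the same, can be expressed as a simple edit check. The sketch below is illustrative only; the function name and sample figures are our own and do not represent Education’s actual review system.

```python
def flag_noncumulative(counts_by_quarter):
    """Flag quarters where a cumulative data point (e.g., the reported
    number of subrecipients) decreases from the prior quarter, which
    should not happen for a cumulative figure. Returns the 0-based
    indices of the suspect quarters for follow-up with the state."""
    return [q for q in range(1, len(counts_by_quarter))
            if counts_by_quarter[q] < counts_by_quarter[q - 1]]

# A state reporting 10, 12, 12, then 11 subrecipients would be flagged
# in the fourth reporting period (index 3):
print(flag_noncumulative([10, 12, 12, 11]))  # [3]
```

A reviewer would then work directly with the flagged state, as Education officials described, to determine whether the decrease reflects a reporting error.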
At this point, the program officials told us they do not generally see significant data quality issues or mistakes when they review recipient reports. In August 2011, the Education OIG reported that they performed 49,150 data quality tests of recipient reported data for grant awards and found anomalies in 4 percent of the tests. The OIG reported that the Department’s processes to ensure the accuracy and completeness of recipient reported data were generally effective. In addition to Education’s efforts to ensure data quality, selected state officials we spoke with said they examined recipient reports of individual subrecipients. For example, Georgia officials told us they reviewed FTE data for reasonableness, compared revenues and expenditures, and ensured all vendors were included in vendor payment reports. The officials stated that they followed up on any questionable items with district staff. As we previously reported, calculating FTE data presented initial challenges for many LEAs, and states worked to ensure the accuracy of the data through a variety of checks and systems. For example, the Mississippi Department of Education helped LEAs calculate FTE data correctly by providing LEAs spreadsheets with ready-made formulas. New York officials told us they examined the calculation of FTEs funded and compared that data with payroll records. North Carolina officials told us that through their review of LEA data, they identified issues with FTE figures that were budgeted but not ultimately verified against actual figures. To improve the accuracy of the data, the state now compares LEA payroll records to their budgeted figures. Education and selected states told us they used recipient reports to obtain data on expenditures, FTEs, and other activities funded to enhance their oversight and management efforts. 
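The FTE calculation that challenged many LEAs follows OMB’s recipient reporting guidance: hours worked and paid for with Recovery Act funds, divided by the hours in a full-time schedule for the reporting quarter. The sketch below illustrates that formula; the 520-hour default is our own illustrative assumption, since actual full-time schedules vary by position and LEA.

```python
def quarterly_fte(funded_hours, full_time_hours=520.0):
    """FTEs for one quarter: hours worked and paid for with Recovery
    Act funds, divided by the hours in a full-time schedule for the
    quarter. The 520-hour default (40 hours/week x 13 weeks) is an
    illustrative assumption, not a prescribed value."""
    return funded_hours / full_time_hours

# Two aides, each funded for 260 hours in the quarter, together
# count as one FTE:
print(quarterly_fte(260 + 260))  # 1.0
```

Spreadsheets with this formula built in, like those Mississippi provided to its LEAs, remove the need for each district to derive the calculation itself.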
For example, Education’s special education program officials and most selected states used recipient reported data to track the amount of Recovery Act funds LEAs spent. In particular, Education officials who administer the IDEA, Part B grant told us they monitored LEA expenditures through recipient reports because it was the only information they had on how much subrecipients had spent. Education and several selected states also told us they examined recipient reports as part of their monitoring efforts. For example, SFSF program officials reviewed recipient reports, particularly expenditure data and the subrecipient award amount, to help choose subrecipients for monitoring. Officials from Arizona, the District of Columbia, and North Carolina told us they used recipient reported data to assess risk and inform their monitoring efforts. For example, the District of Columbia tracks spending rates to ensure subrecipients meet the deadline for using the funds. If a subrecipient has lower-than-expected spending rates, it is subject to increased monitoring. Arizona uses recipient reported data to verify that internal controls are working, for instance by examining expenditure rates to see whether there may be cash management issues. In addition, Iowa and New York officials said they used recipient reported data to ensure appropriate uses of funds. State and LEA officials we spoke with continued to report greater ease in collecting and reporting data for recipient reports. As we previously reported, recipients told us they have gained more experience reporting and the reporting process was becoming routine. For example, Arizona officials told us that their centralized reporting process now runs with very little effort or burden on state and local recipients of Recovery Act education funds. Alaska officials stated that the early quarters were challenging for reporting, but the state training sessions with LEAs helped establish a smooth process by the third quarter. 
At the local level, an LEA official in Iowa told us that while recipient reporting was confusing in the beginning, her district changed some internal procedures and automated some calculations to make the process more efficient. One measure of recipients’ understanding of the reporting process is the number of noncompliant recipients. There were no noncompliant recipients of SFSF; ESEA Title I, Part A; or IDEA, Part B funds in the eighth reporting period. Although the recipient reporting process has become smoother over time, some states and LEAs noted that there continues to be a burden associated with meeting reporting requirements, particularly due to limited resources. For example, California officials stated it had been burdensome to collect data from over 1,500 LEAs when there were significant budget cuts. Officials from the Massachusetts SEA stated that the most burdensome aspect of recipient reporting was the short time frame for collecting data from nearly 400 LEAs when local staff were already stretched thin. At the local level, officials at a rural Mississippi school district stated that gathering the supporting documents for their quarterly reports was cumbersome and took a significant amount of time. For example, in the previous quarter one staff member had to upload more than 70 supporting documents to the state’s centralized reporting system. Further, Education officials noted that the improvements in the process for recipient reporting have not eliminated the burden on LEAs. Moreover, according to Education officials, although the primary goal of the Recovery Act grants was not reporting, grantees were spending significant amounts of time complying with the reporting process when the department already had some data elements, such as grant awards and drawdowns, from other sources. Two recent actions indicate that recipient reporting could be expanded to funds beyond those from the Recovery Act. 
A White House Executive Order dated June 13, 2011, established a Government Accountability and Transparency Board (Board) to provide strategic direction for enhancing the transparency of federal spending and advance efforts to detect and remediate fraud, waste, and abuse in federal programs, among other things. By December 2011, the Board is required to develop guidelines for integrating systems that support the collection and display of government spending data, ensuring the reliability of those data, and broadening the deployment of fraud detection technologies. In addition, one of the objectives of proposed legislation—the Digital Accountability and Transparency Act of 2011 (DATA Act)—is to enhance transparency by broadening the requirement for reporting to include recipients of non-Recovery Act funds. According to Recovery.gov, during the quarter beginning April 1, 2011, and ending June 30, 2011, the Recovery Act funded approximately 286,000 FTEs using funds under the programs in our review (see fig. 8). Further, for this eighth round of reporting, similar to what we observed in previous rounds, education FTEs for these programs accounted for about half of all FTEs reported for the quarter. Following OMB guidance, states reported on FTEs directly paid for with Recovery Act funding, not the employment impact on suppliers of materials (indirect jobs) or on the local communities (induced jobs). According to Education officials, FTE numbers were expected to decrease over time because fewer prime recipients would be reporting as they exhaust all of their Recovery Act funds. FTE data provide an overall indication of the extent to which the Recovery Act met one of its intended goals of saving and creating jobs in order to help economic recovery, although some limitations with these data may make it difficult to determine the impact the Recovery Act made in any one particular reporting period. 
In May 2010, GAO identified a number of issues that could lead to under- or over-reporting of FTEs. Our analysis of the data on Recovery.gov showed variations in the number of FTEs reported, which Education officials said could be explained by states’ broad flexibility in determining how they used Recovery Act SFSF funds and when they allocated those funds. For example, Illinois reported less than 1 FTE in the second reporting round and over 40,000 in the third reporting round for the SFSF education stabilization funds. Education officials stated that rarely would the districts in one state hire 40,000 teachers in 1 quarter. Rather, Education officials said the state likely made a decision to allocate those funds in that quarter to teacher salaries. Similarly, from the fourth to fifth reporting rounds, the number of FTEs more than doubled in Arkansas and nearly doubled in Florida for the SFSF education stabilization funds. Education officials explained that any significant increase or decrease in FTEs likely reflects the state’s decision to allocate the funds in one quarter rather than during another quarter. They noted that some states used their funds consistently over time, whereas others used a large portion of the funds at the beginning or end of a school year. Therefore, sharp increases or decreases in the FTE data are not uncommon or unexpected. Delaware reported no FTEs for SFSF government services funds in the eighth reporting round. Education officials stated that Delaware decided to use those funds on operating costs, not salaries. Education officials told us that recipient reported FTE data were useful to them when assessing the impact of grants on jobs funded. Education does not have any comparable data on jobs funded. Therefore, FTE data provided a measure of the extent to which the Recovery Act programs, particularly SFSF, accomplished that specific goal of funding jobs. 
According to Education officials, determining jobs funded was an important, but secondary impact of the Recovery Act funding for the ESEA Title I, Part A and IDEA, Part B grants. The purpose of ESEA Title I is to ensure that all children have a fair, equal, and significant opportunity to obtain a high-quality education by providing financial assistance to LEAs and schools with high numbers or percentages of poor children. The purpose of IDEA, Part B is to ensure that all students with disabilities have available to them a free appropriate public education that emphasizes special education and related services designed to meet their unique needs. According to Education officials, some of the services provided to students using the ESEA Title I, Part A and IDEA, Part B Recovery Act funds led to the creation of jobs while others served the needs of children but did not directly create jobs. Therefore, while FTE data did provide a useful indication of jobs funded for those programs under the Recovery Act, other measures such as student outcomes will be more useful after the Recovery Act ends when assessing the impact of programs with education-related goals. A key goal of Recovery Act funding was to create and retain jobs and, for SFSF, to advance education reforms, and our work has consistently shown that LEAs primarily used their funding to cover the cost of retaining jobs. Additionally, the transparency required by Recovery Act reporting allowed the public access to data on the number of jobs funded and the amount of funds spent, but as the deadline for obligating funds approaches, little is currently known nationally about the advancement of the four areas of educational reform. Education’s planned evaluation could make an important contribution to understanding any outcomes related to reform. 
This national evaluation could be especially important considering that officials in many of our selected states have not planned evaluations, and many LEAs reported that they are neither collecting nor planning to collect data to evaluate the effect of SFSF on education reform efforts. While Education will assess results through its own evaluation, it will not be fully completed for several years. In the shorter term, state reporting on the SFSF indicators and descriptors of reform is the mechanism through which Education and the public track the extent to which a state is making progress. As these final data become available at the end of this fiscal year, Education has plans for assessing state compliance and analyzing the results in order to present, where possible, information to policymakers and the public. Given the accountability and transparency required by the Recovery Act, we feel it is important for Education to follow through with its plans to hold states accountable for presenting performance information and in its efforts to assist the public and policymakers in understanding the reform progress made by states. In addition to evaluations and reporting, program accountability can be facilitated through monitoring and taking corrective action on audit findings. Because of the historic amount of Education funding included in the Recovery Act, effective oversight and internal controls are of fundamental importance in assuring the proper and effective use of federal funds to achieve program goals. Education’s new SFSF monitoring process took into account the one-time nature of these funds and was designed to make states aware of monitoring and audit findings to help them resolve any issues or make improvements to their program prior to Education publishing a final report. However, Education’s implementation of this process has varied, with some states waiting months to get feedback on monitoring results. 
When states do not receive timely feedback on monitoring findings, they may not have time to resolve these issues before they have obligated their SFSF funds. To ensure all states receive appropriate communication and technical assistance for SFSF, consistent with what some states received in response to SFSF monitoring reviews, we recommend that the Secretary of Education establish mechanisms to improve the consistency of communicating monitoring feedback to states, such as establishing internal time frames for conveying information found during monitoring. We provided a draft copy of this report to Education for review and comment. Education’s comments are reproduced in appendix III. Education agreed with our recommendation to improve the consistency of communicating SFSF monitoring feedback to states. Specifically, Education responded that its SFSF monitoring protocols should include procedures for effectively communicating the status of monitoring feedback to states. Additionally, Education officials reiterated that the new SFSF monitoring approach was designed as an iterative method to take into consideration the large amount of funding, the complexities of state budget situations, the need to expeditiously resolve monitoring issues due to the short time frames, and the large numbers and diverse types of grantees. Through this monitoring approach, Education officials noted that the department has completed reviews of all but one state and is currently planning the second cycle of monitoring. Education officials reported that the feedback provided to states through this new approach was ongoing and that not all states have required the same level of follow-up discussions. GAO agrees that this approach is appropriate given the one-time nature of the SFSF program and, as we point out in our report, this approach has helped states to quickly address potential issues.
Since contacts between Education and the states can be numerous and involve multiple officials and agencies, we believe that any actions taken by the department to improve the consistency of communication with states will improve its monitoring process. Education also provided some additional and updated information about its monitoring efforts, and we modified the report to reflect the data it provided. In addition, Education provided us with several technical comments that we incorporated, as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of Education, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or scottg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To obtain national-level information on how Recovery Act funds made available by the U.S. Department of Education (Education) under SFSF; ESEA Title I, Part A; and IDEA, Part B were used at the local level, we designed and administered a Web-based survey of local educational agencies (LEA) in the 50 states and the District of Columbia. We surveyed school district superintendents across the country to learn how Recovery Act funding was used and what impact these funds had on school districts. We selected a stratified random sample of 688 LEAs from the population of 15,994 LEAs included in our sample frame of data obtained from Education’s Common Core of Data (CCD) in 2008-09. We conducted our survey between March and May 2011, with a 78 percent final weighted response rate.
We took steps to minimize nonsampling errors by pretesting the survey instrument with officials in three LEAs in January 2011 and February 2011. Because we surveyed a sample of LEAs, survey results are estimates of a population of LEAs and thus are subject to sampling errors that are associated with samples of this size and type. Our sample is only one of a large number of samples that we might have drawn. As each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. All estimates produced from the sample and presented in this report are representative of the in-scope population and have margins of error of plus or minus 7 percentage points or less for our sample, unless otherwise noted. We excluded nine of the sampled LEAs because they were no longer operating in the 2010-11 school year or were not an LEA, and therefore were considered out of scope. This report does not contain all the results from the survey. The survey and a more complete tabulation of the results can be viewed at GAO-11-885SP. At the state and local level, we conducted site visits to four states (California, Iowa, Massachusetts, and Mississippi), and contacted an additional seven states (Alaska, Arizona, Georgia, Hawaii, North Carolina, New York, and Wyoming) and the District of Columbia to discuss how they were using, monitoring, and planning to evaluate the effect of their Recovery Act funds. In addition, we contacted officials from Florida, Kansas, and South Carolina for information regarding IDEA, Part B waivers. We selected these states in order to have an appropriate mix of recipients that varied across certain factors, such as drawdown rates, economic response to the recession, and data availability, with consideration of geography and recent federal monitoring coverage. 
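The sampling-error concepts described above can be illustrated with a simplified calculation. The sketch below computes the half-width of a 95 percent confidence interval for an estimated proportion under simple random sampling with a finite population correction; it is only an approximation, since GAO’s actual estimates also account for stratification and survey weights, which generally widen the reported margins of error. The respondent count is inferred from the figures in the text, not taken from GAO’s workpapers.

```python
import math

def margin_of_error(p_hat, n, N, z=1.96):
    """Half-width of a 95 percent confidence interval for a proportion
    under simple random sampling, with finite population correction.
    (GAO's actual estimates also reflect stratification and weighting.)"""
    fpc = (N - n) / (N - 1)                  # finite population correction
    se = math.sqrt(fpc * p_hat * (1 - p_hat) / n)
    return z * se

# Hypothetical worst case (p_hat = 0.5) for roughly the respondent count
# implied by a 78 percent response rate on the 679 in-scope sampled LEAs
# (688 sampled minus the 9 that were out of scope).
n_respondents = round(0.78 * (688 - 9))      # about 530
moe = margin_of_error(0.5, n_respondents, 15994)
print(f"{moe:.3f}")  # about 0.042, i.e., roughly plus or minus 4 percentage points
```

That this simplified figure falls within the plus-or-minus-7-point bound reported in the text is consistent with design effects from weighting and stratification inflating the actual margins above the simple-random-sampling approximation.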
During our site visits, we met with SFSF, ESEA Title I, and IDEA officials at the state level as well as LEAs and an Institution of Higher Education (IHE). For the additional seven states, we gathered information by phone or e-mail from state education program officials on fund uses, monitoring, and evaluation. We also met with program officials at Education to discuss ongoing monitoring and evaluation efforts for Recovery Act funds provided through SFSF, ESEA Title I, and IDEA, and we reviewed relevant federal laws, regulations, guidance, and communications to the states. Further, we obtained information from Education’s Web site about the amount of funds these states have drawn down from their accounts with Education. The recipient reporting section of this report responds to the Recovery Act’s mandate that we comment on the estimates of jobs created or retained by direct recipients of Recovery Act funds. For our review of the eighth submission of recipient reports, covering the period from April 1, 2011, through June 30, 2011, we built on findings from our prior reviews of the reports. We performed edit checks and basic analyses on the eighth submission of recipient report data that became publicly available at Recovery.gov on July 30, 2011. To understand how the quality of jobs data reported by Recovery Act education grantees has changed over time, we compared the 8 quarters of recipient reporting data that were publicly available at Recovery.gov on July 30, 2011. In addition, we reviewed documentation and interviewed federal agency officials from Education who have responsibility for ensuring a reasonable degree of quality across their programs’ recipient reports. Due to the limited number of recipients reviewed and the judgmental nature of the selection, the information we gathered about state reporting and oversight of FTEs is limited to those selected states in our review and not generalizable to other states.
GAO’s findings based on analyses of FTE data are limited to those Recovery Act education programs and time periods examined and are not generalizable to any other programs’ FTE reporting. We compared, at the aggregate and state level, funding data reported directly by recipients on their quarterly reports against the recipient funding data maintained by Education. The cumulative funding data reported by the recipients aligned closely with the funding data maintained by the Department of Education. An Education Inspector General report included a similar analysis comparing agency data to recipient-reported data from the first quarter of 2010. Although not directly comparable to our analysis, that assessment identified various discrepancies between agency and recipient-reported data. We also noted some discrepancies across the education programs we reviewed where the state recipients’ reported expenditures differed by more than 10 percent from the respective outlays reported by Education. In general, however, we consider the recipient report data to be sufficiently reliable for the purpose of providing summary, descriptive information about FTEs or other information submitted on grantees’ recipient reports. To update the status of open recommendations from previous bimonthly and recipient reporting reviews, we obtained information from agency officials on actions taken in response to the recommendations. We conducted this performance audit from October 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
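The comparison of recipient-reported expenditures against agency outlays described above amounts to a simple tolerance check between two data sources. The sketch below is a hypothetical illustration, not GAO’s actual procedure; the state names, program labels, and dollar amounts are invented.

```python
# Hypothetical illustration of flagging states whose recipient-reported
# expenditures differ from agency-reported outlays by more than 10 percent.
# All figures are invented for demonstration purposes.

def flag_discrepancies(recipient_expenditures, agency_outlays, tolerance=0.10):
    """Return (state, program) keys where the relative difference between
    recipient-reported expenditures and agency outlays exceeds tolerance."""
    flagged = []
    for key, reported in recipient_expenditures.items():
        outlay = agency_outlays.get(key)
        if outlay and abs(reported - outlay) / outlay > tolerance:
            flagged.append(key)
    return flagged

recipient_expenditures = {
    ("State A", "SFSF"): 95_000_000,
    ("State B", "IDEA, Part B"): 40_000_000,
}
agency_outlays = {
    ("State A", "SFSF"): 100_000_000,         # 5 percent difference: within tolerance
    ("State B", "IDEA, Part B"): 55_000_000,  # about 27 percent difference: flagged
}

print(flag_discrepancies(recipient_expenditures, agency_outlays))
```

A check of this kind only surfaces mismatches for review; as the text notes, differences can reflect timing of draws and reporting rather than unreliable data.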
In this appendix, we update the status of agencies’ efforts to implement the 16 recommendations that remain open and are not implemented, 8 newly implemented recommendations, and 1 newly closed recommendation from our previous bimonthly and recipient reporting reviews. Recommendations that were listed as implemented or closed in a prior report are not repeated here. Lastly, we address the status of our matters for congressional consideration. Given the concerns we have raised about whether program requirements were being met, we recommended in May 2010 that the Department of Energy (DOE), in conjunction with both state and local weatherization agencies, develop and clarify weatherization program guidance that:

- Clarifies the specific methodology for calculating the average cost per home weatherized to ensure that the maximum average cost limit is applied as intended.
- Accelerates current DOE efforts to develop national standards for weatherization training, certification, and accreditation, which is currently expected to take 2 years to complete.
- Sets time frames for development and implementation of state monitoring programs.
- Revisits the various methodologies used in determining the weatherization work that should be performed based on the consideration of cost-effectiveness and develops standard methodologies that ensure that priority is given to the most cost-effective weatherization work. To validate any methodologies created, this effort should include the development of standards for accurately measuring the long-term energy savings resulting from weatherization work conducted.
In addition, given that state and local agencies have felt pressure to meet a large increase in production targets while effectively meeting program requirements and have experienced some confusion over production targets, funding obligations, and associated consequences for not meeting production and funding goals, we recommended that DOE clarify its production targets, funding deadlines, and associated consequences, while providing a balanced emphasis on the importance of meeting program requirements. DOE generally concurred with these recommendations and has made some progress in implementing them. For example, to clarify the methodology for calculating the average cost per home, DOE has developed draft guidance to help grantees develop consistency in their average cost per unit calculations. The guidance further clarifies the general cost categories that are included in the average cost per unit. DOE had anticipated issuing this guidance in June 2011, but as of late July 2011 it had not yet been finalized. In response to our recommendation that it develop and clarify guidance that includes a best practice guide for key internal controls, DOE distributed a memorandum dated May 13, 2011, to grantees reminding them of their responsibilities to ensure compliance with internal controls and the consequences of failing to do so. DOE officials stated that they rely on existing federal, state, and local guidance; their role is to monitor states to ensure they enforce the rules. DOE officials felt that there were sufficient documents in place to require internal controls, such as the grant terms and conditions and a training module. Because all of the guidance is located in one place, the WAPTAC Web site, DOE officials commented that a best practice guide would be redundant. Therefore, DOE officials stated that they do not intend to fully implement GAO’s recommendation.
To better ensure that Energy Efficiency and Conservation Block Grant (EECBG) funds are used to meet Recovery Act and program goals, we recommended that DOE explore a means to capture information on the monitoring processes of all recipients to make certain that recipients have effective monitoring practices. DOE generally concurred with this recommendation, stating that “implementing the report’s recommendations will help ensure that the Program continues to be well managed and executed.” DOE also provided additional information on changes it has implemented. DOE added questions to the on-site monitoring checklists related to subrecipient monitoring to help ensure that subrecipients are in compliance with the terms and conditions of the award. These changes will help improve DOE’s oversight of recipients, especially larger recipients, which are more likely to be visited by DOE project officers. However, not all recipients receive on-site visits. As noted previously, we believe that the program could be more effectively monitored if DOE captured information on the monitoring practices of all recipients. To better ensure that EECBG funds are used to meet Recovery Act and program goals, we also recommended that DOE solicit information from recipients regarding the methodology they used to calculate their energy-related impact metrics and verify that recipients who use DOE’s estimation tool use the most recent version when calculating these metrics. In our report, we concluded that DOE needed more information regarding the recipients’ estimating methods in order to assess the reasonableness of energy-related estimates and thus determine the extent to which the EECBG program is meeting Recovery Act and program goals for energy-related outcomes. DOE officials noted that they have made changes to the way they collect impact metrics in order to apply one unified methodology to the calculation of impact metrics.
DOE issued guidance effective June 23, 2011, that eliminates the requirement for grant recipients to calculate and report estimated energy savings. DOE officials said the calculation of estimated impact metrics will now be performed centrally by DOE by applying known national standards to existing grantee-reported performance metrics. Based on DOE’s action, we concluded that DOE has addressed the intent of this recommendation. To help ensure that grantees report consistent enrollment figures, we recommended that the Director of the Department of Health and Human Services’ (HHS) Office of Head Start (OHS) should better communicate a consistent definition of “enrollment” to grantees for monthly and yearly reporting and begin verifying grantees’ definition of “enrollment” during triennial reviews. OHS issued informal guidance on its Web site clarifying monthly reporting requirements to make them more consistent with annual enrollment reporting. This guidance directs grantees to include in enrollment counts all children and pregnant mothers who are enrolled and have received a specified minimum of services (emphasis added). According to officials, OHS is considering further regulatory clarification. To oversee the extent to which grantees are meeting the program goal of providing services to children and families and to better track the initiation of services under the Recovery Act, we recommended that the Director of OHS should collect data on the extent to which children and pregnant women actually receive services from Head Start and Early Head Start grantees. OHS has reported that, in order to collect information on services provided to children and families, it plans to require grantees to report average daily attendance, beginning in the 2011-2012 school year. 
To provide grantees consistent information on how and when they will be expected to obligate and expend federal funds, we recommended that the Director of OHS should clearly communicate its policy to grantees for carrying over or extending the use of Recovery Act funds from one fiscal year into the next. Following our recommendation, HHS indicated that OHS would issue guidance to grantees on obligation and expenditure requirements, as well as improve efforts to effectively communicate the mechanisms in place for grantees to meet the requirements for obligation and expenditure of funds. HHS has subsequently reported that grantees have been reminded that the timely use of unobligated balances requires recipients to use the “first in/first out” principle for recognizing and recording obligations and expenditures of those funds. To better consider known risks in scoping and staffing required reviews of Recovery Act grantees, we recommended that the Director of OHS should direct OHS regional offices to consistently perform and document Risk Management Meetings and incorporate known risks, including financial management risks, into the process for staffing and conducting reviews. HHS reported OHS was reviewing the Risk Management process to ensure it is consistently performed and documented in its centralized data system and that it had taken related steps, such as requiring the grant officer to identify known or suspected risks prior to an on-site review. More recently, HHS has indicated that the results and action plans from the Risk Management Meetings are documented in the Head Start Enterprise System and used by reviewers to highlight areas where special attention is needed during monitoring reviews. 
HHS also notes that the Division of Payment Management (DPM) sends OHS monthly reports on grantees to assist OHS in performing ongoing oversight, monitoring grantee spending, and assessing associated risks. In addition, OHS has incorporated a new fiscal information form as a pre-review requirement to ensure that fiscal information and concerns known to the regional office staff are shared with on-site reviewers. Because the absence of third-party investors reduces the amount of overall scrutiny Tax Credit Assistance Program (TCAP) projects would receive and the Department of Housing and Urban Development (HUD) is currently not aware of how many projects lacked third-party investors, we recommended that HUD develop a risk-based plan for its role in overseeing TCAP projects that recognizes the level of oversight provided by others. HUD responded to our recommendation by saying it must wait for final reports from housing finance agencies on TCAP project financing sources in order to identify those projects that need additional monitoring. When the final reports are received, HUD said it will develop a plan for monitoring those projects. HUD said it will begin identifying projects that may need additional monitoring at the end of September 2011, when sufficient information should be available to determine which projects have little Low-Income Housing Tax Credit investment and no other leveraged federal funds.
To enhance the Department of Labor’s (Labor) ability to manage its Recovery Act and regular Workforce Investment Act (WIA) formula grants and to build on its efforts to improve the accuracy and consistency of financial reporting, we recommended that the Secretary of Labor take the following actions:

- To determine the extent and nature of reporting inconsistencies across the states and better target technical assistance, conduct a one-time assessment of financial reports that examines whether each state’s reported data on obligations meet Labor’s requirements.
- To enhance state accountability and to facilitate their progress in making reporting improvements, routinely review states’ reporting on obligations during regular state comprehensive reviews.

Labor reported that it has taken actions to implement our recommendations. To determine the extent of reporting inconsistencies, Labor awarded a contract in September 2010 and completed the assessment of state financial reports in June 2011. Labor is currently analyzing the findings and expects to have a final report and recommendations in the fall of 2011. To enhance states’ accountability and facilitate their progress in making improvements in reporting, Labor issued guidance on federal financial management and reporting definitions on May 27, 2011, and conducted training on its financial reporting form and key financial reporting terms such as obligations and accruals. Labor also reported that it routinely monitors states’ reporting on obligations as part of its oversight process and comprehensive on-site reviews.
Our September 2009 bimonthly report identified a need for additional federal guidance in defining green jobs and we made the following recommendation to the Secretary of Labor: To better support state and local efforts to provide youth with employment and training in green jobs, provide additional guidance about the nature of these jobs and the strategies that could be used to prepare youth for careers in green industries. Labor agreed with our recommendation and has taken several actions to implement it. Labor’s Bureau of Labor Statistics (BLS) has developed a definition of green jobs, which was finalized and published in the Federal Register on September 21, 2010. In addition, Labor continues to host a Green Jobs Community of Practice, an online virtual community available to all interested parties. The department also hosted a symposium on April 28 and 29, 2011, with the green jobs state Labor Market Information Improvement grantees. Symposium participants shared recent research findings, including efforts to measure green jobs, occupations, and training in their states. In addition, the department released a new career exploration tool called “mynextmove” (www.mynextmove.gov) in February 2011 that includes the Occupational Information Network (O*NET) green leaf symbol to highlight green occupations. Additional green references have recently been added and are noted in the latest update, The Greening of the World of Work: O*NET Project’s Book of References. Furthermore, Labor is planning to release a Training and Employment Notice this fall that will provide a summary of research and resources that have been completed by BLS and others on green jobs definitions, labor market information and tools, and the status of key Labor initiatives focused on green jobs. To leverage Single Audits as an effective oversight tool for Recovery Act programs, we recommended that the Director of the Office of Management and Budget (OMB) 1. 
take additional efforts to provide more timely reporting on internal controls for Recovery Act programs for 2010 and beyond; 2. evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act; 3. issue Single Audit guidance in a timely manner so that auditors can efficiently plan their audit work; 4. issue the OMB Circular No. A-133 Compliance Supplement no later than March 31 of each year; 5. explore alternatives to help ensure that federal awarding agencies provide their management decisions on the corrective action plans in a timely manner; and 6. shorten the time frames required for issuing management decisions by federal agencies to grant recipients. GAO’s recommendations to OMB are aimed toward improving the Single Audit’s effectiveness as an accountability mechanism for federally awarded grants from Recovery Act funding. We previously reported that OMB has taken a number of actions to implement our recommendations since our first Recovery Act report in April 2009. We also reported that OMB had undertaken initiatives to examine opportunities for improving key areas of the single audit process over federal grant funds administered by state and local governments and nonprofit organizations based upon the directives in Executive Order 13520, Reducing Improper Payments and Eliminating Waste in Federal Programs issued in November 2009. Two sections of the executive order related to federal grantees, including state and local governments, and required OMB to establish working groups to make recommendations to improve (1) the effectiveness of single audits of state and local governments and non- profit organizations that are expending federal funds and (2) the incentives and accountability of state and local governments for reducing improper payments. 
OMB formed several working groups as a result of the executive order, including two separate working groups on issues related to single audits. These two working groups developed recommendations and reported them to OMB in May and June of 2010. OMB formed a “supergroup” to review these recommendations for improving single audits and to provide a plan for OMB to further consider or implement them. The “supergroup” finalized its report in August 2011. OMB also formed a Single Audit Metrics Workgroup as a result of one of the recommendations made in June 2010 to improve the effectiveness of single audits. In addition, the President issued a memorandum entitled “Administrative Flexibility, Lower Costs, and Better Results for State, Local, and Tribal Governments” (M-11-21) in February 2011 that directed OMB to, among other things, lead an interagency workgroup to review OMB circular policies to enable state and local recipients to most effectively use resources to improve performance and efficiency. Agencies reported their actions and recommendations to OMB on August 29, 2011; the resulting report included recommendations aimed at improving single audits. Since most Recovery Act funds will be expended by 2013, some of the recommendations that OMB acts upon may not be implemented in time to affect single audits of grant programs funded under the Recovery Act. However, OMB’s efforts to enhance single audits could, if properly implemented, significantly improve the effectiveness of the single audit as an accountability mechanism. OMB officials stated that they plan to review the “supergroup’s” August 2011 report and develop a course of action for enhancing the single audit process but have not yet developed a time frame for doing so. We will continue to monitor OMB’s efforts in this area.
(1) To address our recommendation to encourage timelier reporting on internal controls for Recovery Act programs for 2010 and beyond, we previously reported that OMB had commenced a second voluntary Single Audit Internal Control Project (project) in August 2010 for states that received Recovery Act funds in fiscal year 2010. The project was completed, and the results were compiled as of July 6, 2011. One of the goals of these projects was to achieve more timely communication of internal control deficiencies for higher-risk Recovery Act programs so that corrective action could be taken more quickly. The project encouraged participating auditors to identify and communicate deficiencies in internal control to program management 3 months sooner than the 9-month time frame required under statute. The projects also required that program management provide the federal awarding agency a corrective action plan aimed at correcting any deficiencies 2 months earlier than required under statute. Upon receiving the corrective action plan, the federal awarding agency had 90 days to provide a written decision to the cognizant federal agency for audit detailing any concerns it may have with the plan. Fourteen states volunteered to participate in OMB’s second project, submitted interim internal control reports by December 31, 2010, and developed auditee corrective action plans on audit findings by January 31, 2011. However, although the federal awarding agencies were to have provided their interim management decisions to the cognizant agency for audit by April 30, 2011, only 2 of the 11 federal awarding agencies had completed the submission of all of their management decisions, according to an official from the Department of Health and Human Services, the cognizant agency for audit.
In our review of the 2009 project, we had noted similar concerns that federal awarding agencies’ management decisions on proposed corrective actions were untimely, and our related recommendations are discussed later in this report. Overall, we found that the results for both projects were helpful in communicating internal control deficiencies earlier than required under statute. The projects’ dependence on voluntary participation, however, limited their scope and coverage. This voluntary participation may also bias the projects’ results by excluding from analysis states or auditors with practices that cannot accommodate the projects’ requirement for early reporting of internal control deficiencies. Even though the projects’ coverage could have been more comprehensive, the results provided meaningful information to OMB for better oversight of Recovery Act programs and for making future improvements to the single audit process. In August 2011, OMB initiated a third Single Audit Internal Control Project with requirements similar to those of the second project. The goal of this project is also to identify material weaknesses and significant deficiencies for selected Recovery Act programs 3 months sooner than the 9-month time frame currently required under statute so that the findings can be addressed by the auditee in a timely manner. This project also seeks to provide some audit relief for participating auditors, as risk assessments for certain programs are not required. We will continue to monitor the status of OMB’s efforts to implement this recommendation and believe that OMB needs to continue taking steps to encourage timelier reporting on internal controls through Single Audits for Recovery Act programs. (2) We previously recommended that OMB evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act.
OMB officials have stated that they are aware of the increase in workload for state auditors who perform Single Audits due to the additional funding to Recovery Act programs subject to audit requirements. OMB officials also stated that they solicited suggestions from state auditors to gain further insights to develop measures for providing audit relief. For state auditors that participated in the second and third OMB Single Audit Internal Control Projects, OMB provided some audit relief by modifying the requirements under Circular No. A-133 to reduce the number of low-risk programs to be included in some project participants’ risk assessment requirements. However, OMB has not yet put in place a viable alternative that would provide relief to all state auditors that conduct Single Audits. (3) (4) With regard to issuing Single Audit guidance, such as the OMB Circular No. A-133 Compliance Supplement, OMB has not yet achieved timeliness. We previously reported that OMB officials intended to issue the 2011 Compliance Supplement by March 31, 2011, but instead issued it in June. OMB officials stated that the delay of this important guidance to auditors was due to the refocusing of their efforts to avert a governmentwide shutdown. OMB officials stated that although they had prepared to issue the 2011 Compliance Supplement by the end of March by taking steps such as starting the process earlier in 2010 and giving agencies strict deadlines for program submissions, they were not able to issue it until June 1, 2011. OMB officials developed a timeline for issuing the 2012 Compliance Supplement by March 31, 2012. In August 2011, they began the process of working with the federal agencies and others involved in issuing the Compliance Supplement. We will continue to monitor OMB’s efforts in this area.
(5) (6) Regarding the need for agencies to provide timely management decisions, OMB officials identified alternatives for helping to ensure that federal awarding agencies provided their management decisions on the corrective action plans in a timely manner, including possibly shortening the time frames required for federal agencies to provide their management decisions to grant recipients. OMB officials acknowledged that this issue continues to be a challenge. They told us they met individually with several federal awarding agencies that were late in providing their management decisions in the 2009 project to discuss the measures that the agencies could take to improve the timeliness of their management decisions. However, as mentioned earlier in this report, most of the federal awarding agencies had not submitted all of their management decisions on the corrective actions by the April 30, 2011, due date in the second project (and still had not done so by July 6, 2011, when the results of the completed project were compiled). OMB officials have yet to decide on the course of action that they will pursue to implement this recommendation. OMB formed a Single Audit Metrics Workgroup to develop an implementation strategy for developing a baseline, metrics, and targets to track the effectiveness of single audits over time and increase the effectiveness and timeliness of federal awarding agencies’ actions to resolve single audit findings. This workgroup reported its recommendations to OMB on June 21, 2011, proposing metrics that could be applied at the agency level, by program, to allow for analysis of single audit findings. OMB officials stated that they plan to initiate a pilot to implement the recommendations of this workgroup starting with fiscal year 2011 single audit reports. 
We recommended that the Director of OMB provide more direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance. Based on OMB’s actions, we have concluded that OMB has addressed the intent of this recommendation. To provide direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance, the OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations Compliance Supplement (Compliance Supplement) for fiscal years 2009 through 2011 required all federal programs with expenditures of Recovery Act awards to be considered as programs with higher risk when performing standard risk-based tests for selecting programs to be audited. The auditors’ determinations of the programs to be audited are based upon their evaluation of the risks of noncompliance occurring that could be material to an individual major program. The Compliance Supplement has been the primary mechanism that OMB has used to provide Recovery Act requirements and guidance to auditors. One presumption underlying the guidance is that smaller programs with Recovery Act expenditures could be audited as major programs when using a risk-based audit approach. The most significant risks are associated with newer programs that may not yet have the internal controls and accounting systems in place to help ensure that Recovery Act funds are distributed and used in accordance with program regulations and objectives. Since Recovery Act spending is projected to continue through 2016, we believe that it is essential that OMB provide direction in Single Audit guidance to help to ensure that smaller programs with higher risk are not automatically excluded from receiving audit coverage based on their size and standard Single Audit Act requirements.
We spoke with OMB officials and reemphasized our concern that future Single Audit guidance provide instruction that helps to ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance. OMB officials agreed and stated that such guidance will continue to be included in future Recovery Act guidance. We also performed an analysis of Recovery Act program selection for single audits of 10 states for fiscal year 2010. In general, we found that the auditors selected relatively more smaller, higher-risk programs with Recovery Act funding than in the previous period, which appears to have resulted in a relative increase in the number of smaller Recovery Act programs selected for audit in 7 of the 10 states we reviewed. To ensure that Congress and the public have accurate information on the extent to which the goals of the Recovery Act are being met, we recommended that the Secretary of Transportation direct the Department of Transportation’s (DOT) Federal Highway Administration (FHWA) to take the following two actions: (1) develop additional rules and data checks in the Recovery Act Data System, so that these data will accurately identify contract milestones such as award dates and amounts, and provide guidance to states to revise existing contract data; and (2) make publicly available—within 60 days after the September 30, 2010, obligation deadline—an accurate accounting and analysis of the extent to which states directed funds to economically distressed areas, including corrections to the data initially provided to Congress in December 2009. In its response, DOT stated that it implemented measures to further improve data quality in the Recovery Act Data System, including additional data quality checks, as well as providing states with additional training and guidance to improve the quality of data entered into the system.
DOT also stated that as part of its efforts to respond to our draft September 2010 report in which we made this recommendation on economically distressed areas, it completed a comprehensive review of projects in these areas, which it provided to GAO for that report. DOT recently posted an accounting of the extent to which states directed Recovery Act transportation funds to projects located in economically distressed areas on its Web site, and we are in the process of assessing these data. To better understand the impact of Recovery Act investments in transportation, we believe that the Secretary of Transportation should ensure that the results of these projects are assessed and a determination made about whether these investments produced long- term benefits. Specifically, in the near term, we recommended that the Secretary direct FHWA and FTA to determine the types of data and performance measures they would need to assess the impact of the Recovery Act and the specific authority they may need to collect data and report on these measures. In its response, DOT noted that it expected to be able to report on Recovery Act outputs, such as the miles of road paved, bridges repaired, and transit vehicles purchased, but not on outcomes, such as reductions in travel time, nor did it commit to assessing whether transportation investments produced long-term benefits. DOT further explained that limitations in its data systems, coupled with the magnitude of Recovery Act funds relative to overall annual federal investment in transportation, would make assessing the benefits of Recovery Act funds difficult. DOT indicated that, with these limitations in mind, it is examining its existing data availability and, as necessary, would seek additional data collection authority from Congress if it became apparent that such authority was needed. 
DOT plans to take some steps to assess its data needs, but it has not committed to assessing the long-term benefits of Recovery Act investments in transportation infrastructure. We are therefore keeping our recommendation on this matter open. To the extent that appropriate adjustments to the Single Audit process are not accomplished under the current Single Audit structure, Congress should consider amending the Single Audit Act or enacting new legislation that provides for more timely internal control reporting, as well as audit coverage for smaller Recovery Act programs with high risk. We continue to believe that Congress should consider changes related to the Single Audit process. To the extent that additional coverage is needed to achieve accountability over Recovery Act programs, Congress should consider mechanisms to provide additional resources to support those charged with carrying out the Single Audit Act and related audits. We continue to believe that Congress should consider changes related to the Single Audit process. To provide housing finance agencies (HFA) with greater tools for enforcing program compliance, in the event the Section 1602 Program is extended for another year, Congress may want to consider directing the Department of the Treasury to permit HFAs the flexibility to disburse Section 1602 Program funds as interest-bearing loans that allow for repayment. We have closed this Matter for Congressional Consideration because the Section 1602 Program has not been extended.

The American Recovery and Reinvestment Act of 2009 (Recovery Act) provided $70.3 billion for three education programs--the State Fiscal Stabilization Fund (SFSF); Title I, Part A of the Elementary and Secondary Education Act (Title I); and Individuals with Disabilities Education Act (IDEA), Part B. One goal of the Recovery Act was to save and create jobs, and SFSF also requires states to report information expected to increase transparency and advance educational reform.
This report responds to two ongoing GAO mandates under the Recovery Act. It examines (1) how selected states and local recipients used the funds; (2) what plans the Department of Education (Education) and selected states have to assess the impact of the funds; (3) what approaches are being used to ensure accountability of the funds; and (4) how Education and states ensure the accuracy of recipient reported data. To conduct this review, GAO gathered information from 14 states and the District of Columbia, conducted a nationally representative survey of local educational agencies (LEA), interviewed Education officials, examined recipient reports, and reviewed relevant policy documents. As of September 9, 2011, in the 50 states and the District of Columbia, about 4 percent of the obligated Recovery Act funds remain available for expenditure. Teacher retention was the primary use of Recovery Act education funds according to GAO's nationally representative survey of LEAs. The funds also allowed recipients to cover state budget shortfalls and maintain or increase services. However, the expiration of funds and state budget decreases may cause LEAs to reduce services and take actions such as laying off teachers. We also found that nearly a quarter of LEAs reported lowering their local spending on special education, as allowed for under IDEA provisions that provide eligible LEAs the flexibility to reduce local spending on students with disabilities by up to half of the amount of any increase in federal IDEA funding from the prior year. However, even with this flexibility, many LEAs reported having difficulty maintaining required levels of local special education spending. In addition, two states have not been able to meet required state spending levels for IDEA or obtain a federal waiver from these requirements.
States whose waivers were denied and cannot make up the shortfall in the fiscal year in question face a reduction in their IDEA funding equal to the shortfall, which may be long-lasting. Education plans to conduct two types of systematic program assessments to gauge the results of Recovery Act-funded programs that focus on educational reform: program evaluation and performance measurement. In the coming years, Education plans to produce an evaluation that will provide an in-depth examination of various Recovery Act programs' performance in addressing educational reform. In addition, for the SFSF program, Education plans to measure states' ability to collect and publicly report data on preestablished indicators and descriptors of educational reform, and it plans to provide a national view of states' progress. Education intends for this reporting to be a means for improving accountability to the public in the shorter term. Further, Education officials plan to use states' progress to determine whether a state is qualified to receive funds under other future reform-oriented grant competitions. Numerous entities help ensure accountability of Recovery Act funds through monitoring, audits, and other means, which have helped identify areas for improvement. Given the short time frame for spending these funds, Education's new SFSF monitoring approach prioritized helping states resolve monitoring issues and allowed Education to target technical assistance to some states. However, some states did not receive monitoring feedback promptly, and this feedback was not communicated consistently because Education's monitoring protocol lacked internal time frames for following up with states. Education and state officials reported using a variety of methods to ensure recipient reported data are accurate. They also use recipient reported data to enhance their oversight and monitoring efforts.
According to Recovery.gov, the Recovery Act funded approximately 286,000 full-time equivalents (FTE) during the eighth round of reporting, which ended June 30, 2011, for the education programs GAO reviewed. Despite the limitations associated with FTE data, Education found these data to be useful in assessing the impact of grant programs on saving and creating jobs. GAO recommends that the Secretary of Education establish mechanisms to improve the consistency of communicating SFSF monitoring feedback to states. Education agreed with our recommendation. |
The basic challenge of inventory management is having the proper amount of items on hand when required—neither too much nor too little. If inventory levels are too low, DOD and its components may experience supply shortages and be unable to satisfy customer demands. This could result in DOD undertaking costly and often wasteful efforts to recover from being out of stock. If inventory levels are too high, money is invested in items that may never be used. Additionally, a series of unnecessary expenditures is incurred for more warehouses, transportation, and personnel; storage and distribution facilities become more crowded; maintenance workloads may increase; and excess inventory is generated that eventually may have to be disposed of, perhaps at a severe financial loss. Inventory levels are influenced by the amount of time between the initiation of a procurement action and the receipt of the item into the supply system. This time frame is known as acquisition lead time, and it consists of two parts: administrative and production lead times. Administrative lead time is the time interval from the initiation of a procurement action to the contract award, while production lead time is the interval from the contract award to delivery of the items. Because acquisition lead times are the components’ estimates of when an item will arrive, deviations from those estimates have consequences: items arrive too early or too late. To promote accuracy and completeness in the management of acquisition lead times, appropriate policies, procedures, and instructions are an important component of an agency’s internal control framework. As discussed in GAO’s Internal Control Standards, other important control activities, such as those related to information processing systems, performance measures and indicators, and the recording and classification of transactions and events, are also necessary.
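The two-part definition above can be expressed directly in code. The following is a minimal sketch; the function name and the procurement dates are illustrative, not drawn from the report.

```python
from datetime import date

def lead_times(initiation: date, award: date, delivery: date) -> tuple[int, int, int]:
    """Split acquisition lead time into its two parts.

    Administrative lead time: procurement initiation -> contract award.
    Production lead time:     contract award -> delivery of the items.
    """
    admin = (award - initiation).days
    production = (delivery - award).days
    return admin, production, admin + production

# Hypothetical procurement: initiated Jan 5, awarded Apr 15, delivered Nov 1, 2004.
admin, production, total = lead_times(
    date(2004, 1, 5), date(2004, 4, 15), date(2004, 11, 1)
)
print(admin, production, total)  # 101 200 301
```

In this hypothetical case the acquisition lead time is 301 days, of which roughly two-thirds is production lead time, the portion over which, as discussed below, the components have the least control.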
Inventory management and oversight is a responsibility shared by USD (AT&L) and the military components. USD (AT&L) has overall responsibility for developing acquisition policies and for monitoring the overall effectiveness and efficiency of the DOD acquisition system. The components are responsible for implementing the materiel management policies and activities. The DOD Supply Chain Materiel Management Regulation states that the military components should aggressively pursue the lowest possible acquisition lead times and that, in developing lead time estimates, they may use contractor information, historical information from representative procurements, technical documentation, or the best judgment of acquisition personnel. It also establishes overarching guiding principles for the military components, assigns responsibility, defines acquisition lead time, provides guidelines for developing lead time estimates, and states that the components should identify and track deviations from normal historical or projected patterns in such areas as demand, stock levels, and lead times. The military components each have an inventory management agency that purchases and delivers items and services to the warfighter. The primary inventory agencies that provide this support to the warfighter are (1) the U.S. Air Force Materiel Command, (2) the U.S. Army Materiel Command, (3) the Defense Logistics Agency, and (4) the Naval Inventory Control Point. Table 1 shows the primary logistics agencies and their inventory management centers. To implement DOD’s acquisition lead time policy, each military component developed its own procedures for managing acquisition lead times and, as such, each used a slightly different methodology to calculate its estimated administrative lead time and production lead time values. The military components’ acquisition lead time estimates to acquire spare and repair parts varied considerably from the actual lead times experienced.
More specifically, estimated lead times for all of the components rarely approximated actual lead times, with only 5 percent of the deliveries we reviewed having actual acquisition lead times that were within 1 week of the estimated lead time. While each of the military components had instances of both underestimated and overestimated lead times, the Army’s acquisition lead time estimates were generally understated, while DLA’s estimates were generally overstated. The Air Force’s and the Navy’s estimates were both overstated and understated. For the more than one million spare part deliveries we reviewed, the military components’ estimated acquisition lead times rarely approximated the actual lead times and were generally either understated or overstated. DOD’s Supply Chain Materiel Management Regulation provides guidance for developing materiel requirements based on customer expectations while minimizing the investment in inventories. In addition, accurate lead time estimates are critically important in enabling the military components to have the proper amount of inventory on hand. However, as table 2 shows, only 5 percent of the deliveries, totaling about $700 million, had actual acquisition lead times that were within a week of the estimate. The combined value of the lead time underestimates for all the components resulted in slightly over $12 billion in spare and repair parts arriving more than 90 days later than expected, which may have negatively affected equipment readiness and overall readiness rates because units may not have had the necessary inventory to support and sustain ongoing military operations. If lead time estimates had been more accurate, orders could have been placed and funds obligated earlier, and in some instances readiness rates could potentially have been improved.
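The variance buckets used throughout this analysis (within 1 week of the estimate, and more than 90 days late or early) can be expressed as a small classifier. This is an illustrative sketch of the thresholds cited in the report, not GAO's actual tooling; the function name and bucket labels are assumptions.

```python
def classify_variance(actual_days: int, estimated_days: int) -> str:
    """Bucket a delivery by how far its actual acquisition lead time
    strayed from the estimate, using the thresholds cited in the report."""
    diff = actual_days - estimated_days  # positive = arrived later than expected
    if abs(diff) <= 7:
        return "within 1 week of estimate"
    if diff > 90:
        return "more than 90 days late"   # potential parts shortages
    if diff < -90:
        return "more than 90 days early"  # funds obligated prematurely
    return "between 1 week and 90 days off"

print(classify_variance(400, 250))  # more than 90 days late
print(classify_variance(120, 118))  # within 1 week of estimate
```

Applied to each of the million-plus deliveries reviewed, a classifier like this yields the percentages reported in tables 2 through 6.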
Further, the combined value of the lead time overestimates resulted in the military components obligating almost $2 billion more than 90 days earlier than necessary, which could add to excess on-hand inventories, although spare parts that come in early could potentially improve readiness. We reviewed the two parts of acquisition lead time, administrative lead time and production lead time, and found that each of the military components more accurately estimated the administrative portion than the production portion. However, even for administrative lead time, the military components’ estimates fell within the 1-week range only about 20 percent of the time, while production lead time estimates matched the actual production lead times within the 1-week range just over 10 percent of the time. Officials explained that the accuracy of their administrative lead time estimates was better than that of their production lead time estimates because they have more management control over their internal processes than over external contractor practices. Officials stated that variability always exists when generating lead time estimates, but they agreed that improved and more reliable lead time estimates can contribute to lower levels of inventory. They also stated that understated lead time estimates can result in backorders or parts shortages, which may affect a unit’s readiness if the needed spare parts are not available when expected, while overstated estimates result in prematurely obligating funds that could have been used for other military needs and can unnecessarily increase inventory levels and associated costs. The Army tended to underestimate its acquisition lead times and receive items later than expected. Of the 9,380 Army deliveries we reviewed, more than 58 percent had actual acquisition lead times that were more than 90 days longer than the estimated lead times. This represented about $10.6 billion worth of inventory arriving later than expected.
Additionally, almost 12 percent had actual acquisition lead times that were more than 90 days shorter than the estimated lead times, resulting in about $900 million of premature obligations, as shown in table 3. The variances between the Army’s actual and estimated lead times occurred, in part, because of miscoding of late deliveries as not representative of future delivery times, lack of accurate lead time data in one of its computer systems, and data input errors. Of the data we examined, most of the underestimates occurred within the Army Aviation and Missile Life Cycle Management Command within the Army Materiel Command. This command develops, acquires, fields, and sustains aviation, missile, and unmanned vehicle systems. When this command cannot obtain items such as landing gear, helicopter blades, and aircraft access doors when expected, the shortfalls can have immediate and serious ramifications for the operational readiness of many units. We found 3,863 orders, for items valued at $10.3 billion of the $10.6 billion we analyzed, in which actual production lead times were more than 90 days longer than the estimated lead times. According to our analyses of the command’s deliveries received in fiscal year 2005, nearly 63 percent arrived more than 90 days later than expected. Army officials stated that some of the variances between actual and estimated lead times occurred because some actual lead times were miscoded as nonrepresentative by the command’s acquisition personnel, who initially believed that certain delivery delays would be short-lived and were not representative of future deliveries. Once Army officials realized the delays were not short-lived, they said that item managers made some adjustments for particular affected items. Army guidance states that lead times should be computed using the most recent representative procurement.
However, it does not give clear guidance on when continuing late contractor deliveries should be considered representative, and any adjustments made to particular affected items would not prevent similar situations from occurring in the future. As a result, actual lead times can be miscoded and excluded from lead time updates, which makes subsequent estimates inaccurate. Army officials acknowledged that this command has experienced a problem in meeting supply demands for several years, especially after Operation Iraqi Freedom began, because of the surge in demand for its items. The high demand depleted much of the Army’s on-hand supply of inventory more quickly than anticipated, and replacing the items was difficult since many aviation-related items had long lead times for replacement. At the same time, the Army was unable to order some items as quickly as needed because it lacked sufficient available funds to obligate and process orders. However, Army officials stated that many manufacturers were operating at their highest capacity, and placing orders more quickly would not have resulted in the companies actually producing the additional items any faster. Officials from the U.S. Army TACOM Life Cycle Management Command in Warren, Michigan, made similar statements to explain the lateness of some of their deliveries. They agreed that they had experienced delays in getting items from certain contractors due to the high level of demand. They also acknowledged budgetary constraints during the years of our sample that resulted in hiring freezes and other personnel challenges that added to their workload and hindered their ability to process contracts and orders and to periodically review, validate, and make corrections to any inaccurately recorded lead time estimates. Army officials also attributed inaccuracies in lead times to input errors that item managers were unable to detect and correct.
At the Army’s Communications-Electronics Life Cycle Management Command, lead time data are not automatically maintained or updated in the Logistics Modernization Program, which was designed to improve Army maintenance, logistics, and financial operations, and officials had to manually input the data from the command’s older computer system. However, according to Army officials, the heavy workloads of item managers have not allowed them to validate these data to detect and correct any lead time data input errors. Absent actions by the Army, across each of its Life Cycle Management Commands, to determine when deliveries are representative and should be used to update lead time values, maintain and update lead time data in its new computer system, and validate data input to detect and correct errors, late deliveries and parts shortages will likely continue. DLA tended to overestimate its acquisition lead times and receive items sooner than expected. Of the 1,031,779 DLA deliveries we reviewed, almost 40 percent had actual acquisition lead times that were more than 90 days shorter than their estimated lead times. This resulted in about $568 million being obligated earlier than necessary and inventory arriving earlier than expected. Conversely, only about 3 percent of DLA’s deliveries had actual acquisition lead times that were more than 90 days longer than their estimated lead times, totaling approximately $319 million, as shown in table 4. DLA manages almost every consumable item the military services need to operate, and according to officials, many of these items have been placed on long-term contracts, thus allowing faster order processing. Because deliveries from the contractors were also faster, overall acquisition lead times were reduced.
Even though DLA uses a methodology for computing and maintaining lead time estimates that is more heavily weighted toward the recent actual lead times than the existing ones on file, the process did not compute revised estimates that accurately reflected the rapid improvements being made through its lead time initiatives. Additionally, DLA officials stated that they emphasized business practices that encouraged earlier deliveries as opposed to later ones. They went on to state that the storage and handling costs were minimal, although we were unable to confirm this statement, and being able to meet customers’ needs by having the necessary items on hand was most important to them. With the emphasis on meeting or beating the estimated lead times, there is reduced incentive for DLA to adjust its lead times to more precisely reflect actual lead times experienced. Absent actions by DLA to review and revise the methodology and inputs it uses in calculating lead time estimates so that the estimates more precisely reflect its actual experiences, DLA will continue to obligate funds earlier than necessary and have early delivery of items. The Air Force tended to both underestimate and overestimate its acquisition lead times, receiving a significant number of items both sooner and later than expected. Of the 18,335 Air Force deliveries we reviewed, more than 42 percent had actual acquisition lead times that were more than 90 days longer than estimated. This resulted in about $528 million worth of inventory that arrived later than estimated. At the same time, about 24 percent had actual acquisition lead times that were more than 90 days shorter than estimated, which resulted in about $272 million of premature obligations, as shown in table 5. A sample of 30 Air Force deliveries selected from the ones with the greatest variances between actual and estimated lead times provided explanations for some of these variances.
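The DLA update methodology noted above, which weights the most recent actual lead times more heavily than the estimate on file, resembles simple exponential smoothing. The following is a minimal sketch of that idea; the 0.7 weight is an illustrative assumption, not DLA's actual parameter.

```python
def update_lead_time_estimate(current_estimate: float,
                              latest_actual: float,
                              weight_on_actual: float = 0.7) -> float:
    """One update step that weights the most recent actual lead time more
    heavily than the estimate on file (0.7 is an illustrative weight only)."""
    return weight_on_actual * latest_actual + (1 - weight_on_actual) * current_estimate

# Steadily faster deliveries pull the estimate down, but only with a lag.
est = 180.0
for actual in (150, 130, 120):
    est = update_lead_time_estimate(est, actual)
print(round(est, 1))  # 125.6
```

The lag visible in the example illustrates the report's point: when actual lead times improve rapidly, a weighted update of this kind leaves the estimate on file persistently above recent experience, so funds are obligated earlier than necessary.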
In over half of the sampled late deliveries, the item managers at the air logistics centers had used their standard default lead time values for the estimates. It is the Air Force’s standard procedure to use the standard default administrative lead time value for spare parts that have not been bought in more than 5 years, but Air Force guidance does not direct the use of default production lead times for spare parts that have not been purchased for more than 5 years. However, many items we reviewed used the standard default production lead time value because, according to officials, it was an easy estimate for item managers to use given their workload. In these cases, the default values greatly understated the actual lead times and resulted in later arrivals of deliveries to the air logistics centers, which may have negatively affected operational units’ mission readiness if those items were not available when needed. Officials said that these default values may not be the best information available and that other information might be obtained or generated for use in place of the default values. One possibility might be contacting the supplier to determine the current lead time. They noted that the use of these default values could also be an explanation for the overstated lead times as well as the understated lead times. Absent actions by the Air Force to review and validate its default lead time estimates and consider other options for better lead time data, particularly for infrequently purchased items, parts shortages or early obligation of funds will likely continue. The Navy tended to both underestimate and overestimate its acquisition lead times, receiving a significant number of items both sooner and later than expected. Of the 19,304 Navy deliveries we reviewed, just over 39 percent had actual acquisition lead times that were more than 90 days shorter than estimated.
As a result, about $165 million worth of inventory arrived earlier than expected and the funds for this inventory were obligated prematurely. In addition, about 28 percent had actual lead times that exceeded their estimates by more than 90 days, which resulted in almost $561 million of items arriving later than anticipated, as shown in table 6. Navy officials stated that they believe these variances are acceptable and reasonable given the variability in generating lead times, especially for ship parts that are bought infrequently. They said that updating the lead time estimates more often would not make the forecasts more accurate because there are not enough observations per item to support more frequent updates. We did not evaluate whether more frequent updating of the lead time estimates would improve their accuracy. However, some of the variances between the Navy’s actual and estimated lead times occurred because of data input errors. In a sample of 30 Navy deliveries selected from those with the greatest variances between estimated and actual lead times, we found input errors that affected the estimates’ accuracy. For example, in two cases, the lead time estimates were incorrectly loaded into the ordering system used by the inventory control points at 10 times their correct values, and the error was not detected. Also, Navy officials could not explain many of the excessive estimated lead times of the sample items we reviewed, stating that there were conflicting lead time data within their records. Until the Navy addresses these concerns by reviewing and validating its lead time data and correcting errors, either parts shortages or early obligation of funds are likely to continue. USD (AT&L) and the military components’ management actions and initiatives to reduce lead times from 2002 to 2005 were less effective overall than previous initiatives from 1994 to 2002.
Progress in reducing lead times varied greatly by service from 2002 to 2005, with DLA and the Air Force reducing their lead times by about 3.3 and 4.1 percent annually respectively, while the Navy’s lead times remained the same, and the Army experienced an increase in lead times by 0.3 percent annually. Of the various management actions and initiatives taken by the services from 2002 to 2005, some were new and some were continuations of previous initiatives, with each service pursuing varying combinations of initiatives. For example, initiatives to streamline administrative processes were implemented by all military components from 1994 to 2002 and from 2002 to 2005, with DLA and the Air Force more aggressively implementing new initiatives from 2002 to 2005 than did the Army and Navy. In addition, from 1994 to 2002, enhanced USD (AT&L) oversight contributed to the rapid pace of lead time reduction; however, from 2002 to 2005, USD (AT&L) no longer continued to monitor progress made by the components in reducing lead times, and all components experienced reduced management oversight. Moreover, while new initiatives to improve contracting practices were implemented by all military components from 1994 to 2002 and were continued by all components from 2002 to 2005, from 2002 to 2005 DLA and the Air Force began new initiatives to strategically manage relationships with suppliers, while the Army and Navy did not. The military components could have decreased inventory requirements and saved money if more aggressive lead time reductions had been realized from 2002 to 2005 as they had been from 1994 to 2002. USD (AT&L) and the components’ management actions and initiatives to reduce lead times from 2002 to 2005 resulted in a slower rate of reduction in DOD-wide lead times of an average of 0.9 percent annually as compared to an average reduction of 5.6 percent annually from 1994 to 2002. 
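These annual rates compound over the 3-year period. A rough check, using only the rates and the 2002 and 2005 endpoint averages cited in this report, illustrates both the cumulative gap between the two periods' paces and how the per-component annual rates follow from the reported endpoints (the function below is our own illustration, not a DOD computation):

```python
def implied_annual_rate(start_days, end_days, years=3):
    """Compound annual rate of change implied by two average lead times."""
    return (end_days / start_days) ** (1 / years) - 1

# Cumulative effect over 2002-2005 of the two annual reduction paces cited:
slow = 1 - (1 - 0.009) ** 3   # 0.9% per year -> roughly 2.7% total reduction
fast = 1 - (1 - 0.056) ** 3   # 5.6% per year -> roughly 15.9% total reduction
print(f"realized pace: {slow:.1%} total; 1994-2002 pace: {fast:.1%} total")

# The same arithmetic recovers the annual rates from the 2002 and 2005
# component averages reported in this section:
print(f"Air Force: {implied_annual_rate(430, 379):+.1%} per year")  # about -4.1%
print(f"Army:      {implied_annual_rate(305, 308):+.1%} per year")  # about +0.3%
```

The endpoint day counts and percentage rates come from the report's own figures; the check simply confirms they are internally consistent.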
The DOD Supply Chain Materiel Management Regulation gives general guidance stating that the military components should aggressively pursue the lowest possible acquisition lead times. As shown in table 7, progress in reducing lead times varied by military component from 2002 to 2005. The Army experienced an average annual lead time increase of 0.3 percent per year from 2002 to 2005, as compared to an average yearly reduction of 9.7 percent from 1994 to 2002, in part due to higher demands and supplier capacity issues. The Navy’s lead times were unchanged from 2002 to 2005, after decreasing by 2.8 percent from 1994 to 2002. The Air Force reduced its lead times from 2002 to 2005, but at a lower rate than it did from 1994 to 2002. The Air Force reduced its acquisition lead times by an average of 4.1 percent per year from 2002 to 2005, compared to an average yearly reduction of 4.5 percent from 1994 to 2002. Similarly, DLA’s acquisition lead times also decreased at a lower rate from 2002 to 2005 than from 1994 to 2002, being reduced by an average of 3.3 percent per year in the former as compared to 6.2 percent per year in the latter. Each of the military components pursued various initiatives to reduce acquisition lead times during both the 1994-2002 and 2002-2005 time periods with varying results. The progress of the military components in reducing lead times varied because each pursued different combinations of new and continued initiatives and management actions. These initiatives and actions generally fell into three areas of focus: streamlining internal administrative processes, improving oversight, and developing relationships with suppliers, as shown in table 8. DLA began a number of new initiatives and took several management actions from 2002 to 2005 that have helped it reduce lead times, and it also continued several initiatives that it had instituted from 1994 to 2002. 
This combination of continued and new initiatives enabled DLA to reduce its average lead time to 159 days. The Air Force also began a number of new initiatives and took several management actions to reduce lead times from 2002 to 2005, while continuing several initiatives that it had instituted from 1994 to 2002. This combination of continued and new initiatives enabled the Air Force to reduce its average lead time from 430 to 379 days from 2002 to 2005. Conversely, although individual Army components began some new initiatives to reduce lead times, the Army began no new componentwide initiatives from 2002 to 2005. Furthermore, the Army has put less effort into continuing its earlier initiatives, which, combined with higher demands and supplier capacity issues, resulted in the Army’s average lead time increasing from 305 to 308 days from 2002 to 2005. Likewise, the Navy did not begin any new componentwide initiatives to reduce lead times from 2002 to 2005, and its lead times held steady at 416 days. Initiatives to streamline administrative processes were implemented or continued by all military components from 1994 to 2002 and from 2002 to 2005, with DLA and the Air Force implementing new initiatives more aggressively from 2002 to 2005 than did the Army and Navy. All components are working to design new information technology systems that could potentially improve administrative lead times. For example, DLA has just transitioned to its newly implemented information technology system, which officials said will help reduce process times for a number of transactions, shaving days off of administrative lead time. The components are also working on solutions outside of information technology.
For example, Air Force officials recently said that they completed an initiative to reduce clutter on work desks, which involved redesigning all workspaces so that if an employee is absent, another employee can find any needed document in the absent employee’s desk within 5 minutes. They credited this initiative with preventing bottlenecks that could occur if employees had to search for needed documents and information, potentially delaying the acquisition of items. The Army’s information technology initiative has only been implemented at one of its Life Cycle Management Commands, and the Navy’s is still in the planning stages. One particular initiative that officials cited as having been effective in reducing administrative lead times for the Air Force and Army over the last decade has been the entering of technical data into the inventory control computer systems for items in stock before a need arises to order them again. According to officials, from 1994 to 2002, the Army in particular made significant progress in reducing lead times because of the entering of technical specification data. Before technical data for items were entered into computers, engineers often had to delay the acquisition process while they prepared technical drawings and wrote technical specifications. These delays ranged from days to several months. By determining technical specifications before there was a need for an item and saving these data in the computer system, officials were able to greatly reduce administrative lead times. They said that having the specifications already in the system helped reduce lead times even when they subsequently needed changing; however, they added that they have not completed entering technical specifications for all items. Although Army engineers have reduced workloads during certain periods when they have fewer orders to process, there are no efforts underway to enter technical specification data during these periods.
An Army official indicated they were not entering technical specifications for items where the lead time savings would typically be fewer than 2 weeks, because such savings are not considered significant by Army officials. Army officials, however, made this determination without using any metrics or measures to determine the actual savings or cost of entering technical specifications for items with savings of fewer than 2 weeks. From 1994 to 2002, enhanced USD (AT&L) oversight and guidance contributed to the rapid pace of lead time reduction; however, from 2002 to 2005, USD (AT&L) no longer continued to monitor progress made by the components in reducing lead times, and all components experienced reduced management oversight. In 1994, we reported that USD (AT&L) was unaware of the lack of progress made in reducing lead times from 1990 to 1994 because of the absence of adequate oversight information. We also indicated that the data reported by military components did not include historical trends to indicate changes in lead time days before and after the lead time reduction initiatives were begun. Likewise, we reported that the statistics at that time were not comprehensive enough to tie specific initiatives to the lead time reductions experienced for individual initiatives. At the time, however, USD (AT&L) was able to provide a general estimate of the financial benefit of lead time reductions, determining that for each day that the DOD-wide average lead time is reduced, a procurement savings of $10 million can be realized. If the financial benefits of lead time reductions are the same in 2005 as they were in 1994, the value of the savings in 2005 dollars would be $12.5 million per day. On November 23, 1994, USD (AT&L) issued a memorandum to its components emphasizing the importance of fully implementing its guidance on reducing acquisition lead times. 
On March 8, 1995, according to DOD officials, components were challenged to reduce business process cycle times by at least 50 percent over the next 5 years (from 1995 to 2000). According to DOD officials, guidance and oversight were then applied to acquisition lead times through the budget process. However, by 2002, USD (AT&L) officials said they no longer provided active oversight on acquisition lead time or monitored the progress made by the components in reducing lead times, because management focus shifted from reducing lead times to improving performance on broader metrics such as backorders. They added that they continued to monitor other broad metrics from 2002 to 2005 and did not establish lead time reduction goals or require standardized reporting of metrics designed to measure reductions in lead times. In addition, with the exception of DLA’s Strategic Material Sourcing initiative, USD (AT&L) and component officials said they did not collect data, establish metrics, or measure and report the impact and costs of any specific initiative on lead times. Without this information, USD (AT&L) and the components were unable to provide effective oversight of lead time reduction efforts. Furthermore, from 2002 to 2005, USD (AT&L) officials said they no longer measured the financial impact of lead time reductions on inventories. USD (AT&L) and the components thus have been unable to determine the relative value of pursuing lead time reductions when determining the best use of their resources. The inability to determine the financial impact on inventories of lead time reductions and the projected time saved from the proposed initiatives impedes the ability of decision makers to make informed choices as to which initiatives to implement. According to officials, without active USD (AT&L) oversight, all components experienced reduced management oversight from 2002 to 2005.
Officials from the military components indicated that, because less emphasis was placed on lead times by USD (AT&L), less emphasis was placed on lead times at the component level. These officials said that component managers tend to place enhanced management focus on what they are held accountable for by USD (AT&L). Component officials suggested that renewed emphasis on lead time reduction by USD (AT&L), including the setting of lead time reduction goals, could increase the components’ management focus on reducing lead times. Until USD (AT&L) takes steps to exercise oversight as it did from 1994 to 2002, such as reemphasizing guidance, establishing lead time reduction goals, collecting data and establishing metrics to measure progress toward meeting lead time reduction goals, measuring and reporting on the results of individual initiatives, and measuring the financial impact of lead time reductions, the components and USD (AT&L) will not have available the information needed to effectively manage and provide oversight of lead times, hampering their ability to reduce lead times. Further, without this information, USD (AT&L) and the components will not be able to prioritize or reevaluate lead time reduction initiatives, determine the relative importance of lead time reduction when making contracting decisions, or determine the cost-effectiveness of lead time reduction efforts. Subsequent to September 2005, Air Force and DLA officials said they began planning and implementing new efforts to improve oversight, including setting lead time reduction goals, holding managers accountable for lead times, tracking lead times to ensure that goals were met, and regularly reporting lead times to managers. In addition, a new metric is also currently under development by DLA, called attainment to plan, which measures the ability of item supply planners to have material available when needed. 
DLA officials stated that they anticipate that increased focus on lead times will improve performance on this metric. Moreover, USD (AT&L) officials stated they were working with the military components to define a DOD-wide lead time metric. They also stated in August 2006 that they were in the process of awarding a contract to a private company to evaluate whether USD (AT&L) oversight of lead times would be worthwhile, and stated that they were currently providing no such oversight. USD (AT&L) officials indicated that increases in lead times could lead to increases in backorders, and said that they provide oversight on backorders. Initiatives to develop relationships with suppliers were implemented by all of the military components from 1994 to 2002. All military components implemented initiatives to improve contracting practices from 1994 to 2002 and continued them from 2002 to 2005. For example, each component used initiatives to increase the use of long-term contracts to reduce lead times. According to Navy officials, one example of a successful initiative begun in the late 1990s was the Navy’s practice of considering lead times as criteria in contract awards for spare parts. Whenever issuing a new contract for spare parts, they said, the Navy sets as a criterion for the bid a 25 percent reduction in the item’s production lead time, and by adding this factor, the Navy is able to encourage suppliers to reduce lead times. In addition to continuing these prior initiatives, from 2002 to 2005 DLA and the Air Force began new initiatives to strategically develop relationships with suppliers.
According to DLA and Air Force officials, these new initiatives not only helped reduce lead times by allowing for streamlined and simplified purchasing of items on long-term contracts, but also (1) allow for increased information sharing with suppliers, (2) enable components to leverage their buying power, and (3) empower components to strategically target key items to ensure their availability. For example, according to DLA officials, their Strategic Material Sourcing initiative is intended to improve procurement for 3.6 million items designated as critical. Items are designated as critical based on a series of factors, then are grouped into categories, with different acquisition strategies being used for different categories of items. Of the 3.6 million items marked as critical, 390,000 were identified for placement on contracts strategically designed to leverage DLA’s market power to improve sourcing for these items. By forming alliances with producers of these items, DLA officials told us they have been able to reduce lead times by taking advantage of DLA’s buying power and by negotiating contracts that ensure supply availability in otherwise volatile markets. As of August 2006, one-half of these targeted items were already on strategic long-term contracts. According to officials, this initiative has thus far generated $247 million in gross savings with over $64 million generated in 2005 alone, while costing only $5.6 million to implement. These savings do not include savings from reduced storage costs, nor do they include the future savings expected as the program continues. This initiative is also unique in that DLA officials said they are using metrics to measure and report the effectiveness of the initiative, thereby improving accountability. An example identified by Air Force officials is the purchase supply chain management initiative. 
One of many parts of this initiative aimed at reducing lead times is the use of Commodity Councils to help improve acquisition of select items. Commodity Councils are groups of experts in particular commodity groupings who work together to improve acquisition of these items. They do so through commodity management, which is the process of developing a systematic approach to the entire usage cycle for a group of items. In addition, USD (AT&L) is in the process of implementing a new initiative to improve commodity management DOD-wide. This new initiative seeks to emulate across DOD the successes of commodity management programs run by DLA and the Air Force. In contrast, the Army and the Navy, while continuing earlier initiatives, have not begun new initiatives to develop strategic relationships with suppliers for critical items. Army and Navy officials indicated that they are content with the lead time reductions experienced and stated that new initiatives were not undertaken because of a lack of USD (AT&L) focus and oversight on lead time reduction. Officials cited ongoing military operations as one of the primary factors diverting attention away from reducing lead times. While the Army and Navy continue to benefit from the lead time reductions generated by past initiatives, until these two components begin initiatives to develop strategic relationships with suppliers, they may be unable to realize the potential benefits from improved supplier relationships and may continue to experience lower rates of lead time reduction than DLA and the Air Force. The military components could have decreased inventory requirements and saved money if more aggressive lead time reductions had been realized from 2002 to 2005, as they had been from 1994 to 2002. DOD budget documents indicate that inventory requirements to cover lead times increased from $15.6 billion in 2002 to $19.9 billion in 2005.
According to officials, the primary reason for the increase in inventory has been increased demand due to recent military operations. As a result, even as lead times were reduced by an average of 0.9 percent a year from 2002 to 2005, requirements to cover lead times rose. If the military components had been able to continue reducing lead times by an average of 5.6 percent a year, as they did from 1994 to 2002, their lead time inventory requirements would only have risen to $17.2 billion, rather than to $19.9 billion, as shown in figure 1. The additional lead time requirements potentially tied up $2.7 billion that could have been obligated for other needs. In addition to the potential savings associated with decreased inventory requirements, if the military components had been able to continue reducing their lead times at 5.6 percent per year, it would have led to significant savings from a reduced need to maintain “safety” inventory, which is the amount of inventory the military components maintain on hand to cover supply and demand fluctuations. This level is determined by a formula that includes a number of factors, including lead times. Reductions in lead times can significantly impact the safety inventories needed. Due to reduced USD (AT&L) oversight of lead times, we were unable to determine how reducing lead times would financially impact procurement costs for safety inventories. However, in 1994 we reported that if the components could reduce their overall lead times by 25 percent by 2000, it would lead to a procurement savings of about $910 million. Until USD (AT&L) and the components take steps to renew their focus on reducing lead times by aggressively continuing prior initiatives and implementing successful new initiatives, the components may continue to experience spare parts shortages and may spend significantly more money to purchase additional inventory.
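The $17.2 billion figure above can be reproduced, to within rounding, by holding the demand-driven growth fixed and substituting the earlier 5.6 percent annual reduction pace for the realized 0.9 percent. The calculation below is our approximation of that arithmetic, not DOD's actual requirements model:

```python
# 2005 lead time inventory requirement actually reported ($ billions).
actual_2005 = 19.9

# Realized and hypothetical average annual lead time reductions, 2002-2005.
realized_rate = 0.009      # 0.9 percent per year
hypothetical_rate = 0.056  # 5.6 percent per year, the 1994-2002 pace

years = 3  # 2002 through 2005

# Rescale the 2005 requirement by the ratio of the cumulative reduction
# factors, leaving the demand-driven growth component unchanged.
hypothetical_2005 = actual_2005 * ((1 - hypothetical_rate) /
                                   (1 - realized_rate)) ** years
print(round(hypothetical_2005, 1))                 # about 17.2
print(round(actual_2005 - hypothetical_2005, 1))   # about 2.7 tied up
```

That the result matches the reported $17.2 billion and $2.7 billion suggests the report's figure reflects this kind of compounding adjustment; the true requirements computation likely involves item-level detail not shown here.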
Acquisition lead times are the military components’ estimates of when items will arrive, and deviating from that expectation increases the likelihood that the right supplies will not be at the right place at the right time. When the components understate their lead time estimates, material shortages and reduced readiness can occur. Without more accurate lead time estimates, the components will not place orders and obligate funds as early as necessary, and they may miss opportunities to improve readiness rates. Conversely, overstated and lengthy acquisition lead time estimates can cause early obligation of funds as well as increases in on-hand inventories, although spare parts that come in early could potentially improve readiness. Until the Army reviews and evaluates when deliveries are representative and should be used to update lead time values, maintains lead time data in each of its computer systems, and validates data input, later than expected deliveries and potential parts shortages will likely occur. In addition, absent actions by DLA to review and revise the methodology and inputs it uses to compute lead time estimates, DLA will continue to obligate funds earlier than necessary and have early delivery of items. Moreover, without taking steps to review and validate default lead time estimates and consider other options for obtaining better lead time data, the Air Force will continue to experience early obligation of funds and potential parts shortages. Finally, until the Navy reviews and validates its lead time data and corrects errors, parts shortages and early obligation of funds are likely to continue. Absent actions by all of the military components to address these problems and institute corrective procedures, their acquisition lead time estimates will continue to vary greatly from their actual lead times. The military components have also slowed their efforts to reduce acquisition lead times as compared to earlier years.
Their current lead time reduction rate may not be significant enough to offset the costs of growing requirements. Until USD (AT&L) and the military components take steps to renew their focus on reducing lead times by continuing prior initiatives and implementing successful new initiatives to streamline administrative processes, improve oversight, and develop strategic relationships with suppliers, they will be unable to significantly reduce lead times as they were able to do in the past. As a result, the military components may potentially spend hundreds of millions of dollars to purchase additional inventory. Increased emphasis on improved lead time estimates and overall lead time reductions will improve the military components’ ability to efficiently use available resources. To improve the military components’ accuracy in setting acquisition lead time values, we recommend that the Secretary of Defense take the following six actions.

1. Direct the Secretary of the Army to have the Commanding General, Army Materiel Command, direct the Aviation and Missile Life Cycle Management Command to establish clear guidelines for item managers to know when to review and how to determine whether deliveries should be considered representative and thus used to update lead times.

2. Direct the Secretary of the Army to have the Commanding General, Army Materiel Command, direct the Life Cycle Management Commands to reemphasize the importance of periodically reviewing and validating their recorded lead time data to detect and correct data input errors and other inaccurate information.

3. Direct the Secretary of the Army to have the Commanding General, Army Materiel Command, direct the Communications-Electronics Life Cycle Management Command to maintain and update automated lead time data within its Logistics Modernization Program computer system.

4. Direct the Director of DLA to have its supply centers review the methodology and inputs used to compute its lead time estimates and revise them to incorporate recent improvements in DLA actual lead times.

5. Direct the Secretary of the Air Force to have the Commander, Air Force Materiel Command, direct its air logistics centers to use better sources of lead time information, such as supplier estimates, if available, rather than default values for items that have not been ordered in the last 5 years.

6. Direct the Secretary of the Navy to direct the Commander, Naval Inventory Control Point, to reemphasize the importance of having its inventory control points periodically review and validate their recorded lead time data to detect and correct data input errors or other inaccurate information.

To strengthen DOD’s and the military components’ management of acquisition lead times, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to take the following five actions.

1. Establish component lead time reduction goals over a 5-year period, from October 2007 through 2012.

2. Develop metrics to measure components’ progress toward meeting lead time reduction goals and require the periodic reporting of these metrics.

3. Develop a general estimate of the financial impact of lead time reductions, and use that as a metric to help components weigh the importance of lead time reductions.

4. Direct the components to collect data, establish metrics, and measure and report the impact of individual lead time reduction initiatives, to include the cost of each initiative and its estimated cost savings.

5. Work closely with the Army and Navy to develop joint strategic relationships with suppliers that would be beneficial in reducing lead times.

In written comments on a draft of this report, DOD concurred with eight, partially concurred with one, and did not concur with two of our recommendations.
For the eight recommendations with which DOD concurred, the department identified actions and plans that are being taken to implement these recommendations. We agree that most of the identified actions are responsive and reasonable to address our concerns, although in several cases the final actions may not be completely implemented for several years. However, some of the department’s comments did not appear to address our concerns. More specifically, for one of the recommendations with which DOD concurred, we do not believe that its comments address our recommendation that the Army maintain and update automated lead time data within its Logistics Modernization Program computer system. In its comments, DOD said that this computer system does not provide automatic updates of data for calculation but it does have information needed to make decisions for manual implementation. As stated in our report, manual input errors have contributed to inaccuracies in lead times, and we believe these inaccuracies will continue if the department relies on manual implementation. We continue to believe that automated updates and maintenance of lead time data are needed to improve the accuracy of lead time estimates. Further, DOD stated in its comments that it already had actions underway to address our recommendation to develop metrics to measure progress toward meeting lead time reduction goals. However, the contract for reviewing lead times is not to be awarded until later in fiscal year 2007. Since this effort was not underway at the time of our review, we believe that it is important to recommend that this effort be pursued until fully implemented. DOD partially concurred with our recommendation that the Under Secretary of Defense for Acquisition, Technology, and Logistics develop a general estimate of the financial impact of lead time reductions, and use that as a metric to help components weigh the importance of lead time reductions. 
DOD stated that to the extent that financial impact can be estimated, it will be one of the elements considered in a review DOD expects to conclude in 2008. DOD further stated that the challenge in estimating the financial impact of lead time reductions was that there are many other variables, and the effect of individual variables on lead time estimates cannot be separately identified. We recognize that the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics has concerns about its ability to estimate the financial impact of lead time reductions, but note that it was able to provide an estimate of $10 million in financial impact for each day that lead time was reduced when we published our 1994 report. Moreover, during our review, TACOM officials informed us that they have the ability to simulate the impact of reductions in lead times using their requirements determination process system on an item-by-item basis. The potential savings generated from the simulations could be helpful in estimating the savings from lead time reduction initiatives. We further note that the inability to determine the financial impact of lead time reductions does not provide the needed incentives for the components to reduce lead times and impedes the ability of decision makers to make informed choices as to which initiatives to implement. Therefore, we continue to believe that the recommendation to the Under Secretary of Defense for Acquisition, Technology, and Logistics is valid. In addition, DOD did not concur with our recommendation to DLA to have the supply centers review the methodology and inputs used to compute its lead time estimates and revise them to incorporate recent improvements in DLA actual lead times. 
DOD stated that our review used data primarily from DLA's legacy system from 2002 to 2005, which was prior to DLA's implementation of its new computer system, called Business Systems Modernization, and that consequently the benefits of this new system and its processes were not taken into account in our review. While we agree that the implementation of this new computer system should provide DLA with more tools to manage acquisition lead times, according to DLA's Cross-Process Policy Memorandum 06-001, dated June 1, 2006, the basic methodology for automatic adjustments to both administrative and production lead times remains the same in the new system as under the legacy system (i.e., each is calculated as a weighted average based on one-third of the existing lead time of record and two-thirds of the actual or new lead time for the current award). Calculating the lead times in the same manner but recording the values in a newly implemented computer system will not improve the accuracy of the lead time estimates. Therefore, we continue to believe that the recommendation to DLA is valid. Moreover, DOD did not concur with our recommendation that the Under Secretary of Defense for Acquisition, Technology, and Logistics work closely with the Army and Navy to develop joint strategic relationships with suppliers that would be beneficial in reducing lead times. The department stated that it is actively pursuing a joint strategy to develop strategic relationships, and that instructing the services to develop strategic relationships separately with these suppliers would lead to a duplication of effort and dissipate the department's leverage. We believe that DOD misunderstood our recommendation. The joint strategy initiative that DOD is actively pursuing, according to documentation provided by DOD, is focused on commodity management, not on developing strategic relationships to reduce lead times.
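The weighted-average adjustment described in the memorandum is simple to state explicitly. The following sketch (our illustration, not DLA code) shows the rule; note that each new award moves the estimate only two-thirds of the way toward the latest actual lead time, regardless of which computer system records the result.

```python
def updated_lead_time(existing_days: float, actual_days: float) -> float:
    """Weighted-average lead time update described in DLA's Cross-Process
    Policy Memorandum 06-001: one-third weight on the existing lead time
    of record, two-thirds on the actual lead time for the current award."""
    return existing_days / 3 + 2 * actual_days / 3

# Example: an item carried at 90 days whose latest award actually took
# 180 days moves to a 150-day estimate; the estimate converges to the
# actual only gradually across repeated awards.
print(updated_lead_time(90, 180))  # 150.0
```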
Our recommendation calls for the Under Secretary of Defense for Acquisition, Technology, and Logistics to work closely with the Army and Navy to move beyond simply managing the acquisition of individual parts, and to form strategic partnerships with key suppliers for ranges of items in situations where it would be possible to leverage these relationships to reduce lead times. Documentation from DOD further states that DOD’s commodity management plan acknowledges that service initiatives will produce improvements, and that it respects those initiatives. Our recommendation, for the Under Secretary of Defense for Acquisition, Technology, and Logistics to work closely with the Army and Navy to develop similar initiatives to those already underway by DLA and the Air Force, is not duplicative of ongoing efforts, but would complement them. Until the Army and the Navy begin initiatives to develop strategic relationships with suppliers, they may be unable to realize the potential benefits from improved supplier relationships and may continue to experience lower rates of lead time reductions than DLA and the Air Force. Therefore, we continue to believe that the recommendation to the Under Secretary of Defense for Acquisition, Technology, and Logistics is valid. The department’s comments are reprinted in appendix II. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services; the Subcommittee on Defense, Senate Committee on Appropriations; the House Committee on Armed Services; the Subcommittee on Readiness, House Committee on Armed Services; and the Ranking Minority Member, Subcommittee on Defense, House Committee on Appropriations. 
We are also sending copies to the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Director of DLA; and the Under Secretary of Defense for Acquisition, Technology, and Logistics. Copies will be made available to others upon request. Should you or your staff have any questions concerning this report, please contact William M. Solis, Director, at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To address our objectives, we reviewed relevant documents, guidance, reports, and other information, as available, relating to acquisition lead times for class IX spare parts and any initiatives the Department of Defense (DOD) or the military components were undertaking in this area. We also interviewed cognizant officials within the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Defense Logistics Agency Headquarters; the Army Materiel Command Headquarters; Headquarters Air Force, the Deputy Chief of Staff, Installations, and Logistics, Inventory Management and Stockage Branch; and the Naval Supply Systems Command, Naval Inventory Control Point-Mechanicsburg, Pennsylvania. We also performed additional work at the Air Force Materiel Command Headquarters at Wright-Patterson Air Force Base, Ohio; had discussions with officials at the U.S. Army Tank Automotive and Armaments (TACOM) Life Cycle Management Command in Warren, Michigan; and obtained data from the U.S. Army Communications-Electronics Life Cycle Management Command, the U.S. Army Aviation and Missile Life Cycle Management Command, and the Naval Inventory Control Point-Philadelphia, Pennsylvania.
To examine the extent to which the military components' estimated lead times varied from actual lead times, we obtained and reviewed information from each military component concerning any relevant policies, procedures, regulations, instructions, or memorandums about acquisition lead time development, maintenance, or management. We also obtained information regarding the processes used by the military components in generating their acquisition lead times from discussions with cognizant officials. To test the accuracy of the military components in estimating acquisition lead times against the actual arrival of items ordered, we requested that each military component provide us with a data file containing the following information for class IX spare parts received between October 1, 2004, and September 30, 2005: date ordered, company ordered from, quantity ordered, date delivered, quantity delivered, delivery location, purchase order number or other financial reference, cost per item, item name, item national stock number (NSN), total cost of order, forecasted/on-file administrative lead time for the item at time of order, forecasted/on-file production lead time for the item at time of order, and overall acquisition lead time for the item. For DLA and the Air Force, we obtained data covering deliveries to all three of their supply centers and Air Logistics Centers, respectively. For the Army, we obtained data from three Life Cycle Management Commands: TACOM, Communications-Electronics, and Aviation and Missile. We also obtained data from the Naval Inventory Control Points located in Mechanicsburg and Philadelphia, Pennsylvania. We compared the forecasted/on-file estimated lead times for each delivery with the actual lead times experienced, and then grouped the variances into five categories.
The categories were that the actual lead time (1) was within plus or minus 1 week of the estimated lead time, (2) was greater than 1 week to less than 90 days earlier than the estimated lead time, (3) was 90 or more days earlier than the estimated lead time, (4) was greater than 1 week to less than 90 days later than the estimated lead time, and (5) was 90 or more days later than the estimated lead time. For the records in each category, we calculated the percentage of the total number of records reviewed and the associated dollar value. We took steps to ensure the reliability of the data we used in our review. We provided a list of specific data elements to the Army, Navy, Air Force, and DLA officials. The military components returned the requested information to us. To assess the reliability of these data, we reviewed the data for obvious inconsistency and completeness errors. In addition, we worked with agency officials to identify any data problems. When we found discrepancies (such as unpopulated fields or inconsistent values), we brought them to the officials' attention and worked with them to correct the errors. In addition, we sent an electronic questionnaire regarding our use of the data and followed up on issues we believed were pertinent to the reliability of the data. Based on these efforts, we determined that the data were sufficiently reliable for the purposes of our report. To examine the extent to which military components' current management actions, initiatives, and other programs have reduced lead times and affected inventory and budget requirements, we obtained and reviewed information from each military component concerning any relevant policies, procedures, regulations, instructions, or memorandums regarding efforts, policies, actions, or initiatives to reduce lead times.
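The five variance categories just described reduce to comparing the difference between actual and estimated lead times against the 1-week and 90-day thresholds. A minimal sketch (our illustration, not the actual analysis code):

```python
def variance_category(estimated_days: int, actual_days: int) -> int:
    """Classify actual minus estimated lead time into the five categories
    used in the review: (1) within plus or minus 1 week, (2) >1 week to
    <90 days early, (3) 90+ days early, (4) >1 week to <90 days late,
    (5) 90+ days late."""
    diff = actual_days - estimated_days
    if abs(diff) <= 7:
        return 1
    if diff < 0:                      # arrived earlier than estimated
        return 3 if diff <= -90 else 2
    return 5 if diff >= 90 else 4     # arrived later than estimated

# A part estimated at 120 days that took 240 days is 90 or more days late.
print(variance_category(120, 240))  # 5
```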
We also interviewed officials within the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Defense Logistics Agency Headquarters; the Army Materiel Command Headquarters; Headquarters Air Force, the Deputy Chief of Staff, Installations, and Logistics, Inventory Management and Stockage Branch; and the Naval Supply Systems Command, Naval Inventory Control Point-Mechanicsburg, Pennsylvania. We also performed additional work at the Air Force Materiel Command Headquarters at Wright-Patterson Air Force Base, Ohio, and had discussions with officials at the U.S. Army TACOM Life Cycle Management Command in Warren, Michigan. We further examined budget stratification data from the Army, Navy, Air Force, and the Defense Logistics Agency. Using those budget stratification data, we reviewed all items present in the September 30 budget stratification reports for both 2002 and 2005 to determine the changes in average acquisition lead time for those items. We were unable to obtain budget stratification data for the components for 1994, and thus reported the results of our 1994 report evaluating overall lead times for each component. Additionally, we requested and analyzed the summary budget stratification reports for all components for September 2002 through September 2005 to determine any changes in average acquisition lead time and budget requirements from 2002 to 2005. Based on our efforts, we determined that the data were sufficiently reliable for the purposes of our report. We conducted our work from November 2005 through November 2006 in accordance with generally accepted government auditing standards. In addition to the contact listed above, Lawson Gist, Jr., Assistant Director, Rebecca Beale, Christopher Miller, Terry Richardson, Grant Mallie, Catherine Hurley, Minette Richardson, Nancy Hess, Art James, Renee Brown, Gayle Fischer, Kenneth Patton, and Nicole Harms made key contributions to this report.
GAO has identified the Department of Defense's (DOD) management of its inventory as a high-risk area since 1990 due to ineffective and inefficient inventory systems and practices. Management of inventory acquisition lead times is important in maintaining cost-effective inventories, budgeting, and having material available when needed, as lead times are DOD's best estimate of when an item will be received. Under the Comptroller General's authority to conduct evaluations on his own initiative, GAO analyzed the extent to which (1) DOD's estimated lead times varied from actual lead times, and (2) current management actions and initiatives have reduced lead times as compared to past years. To address these objectives, GAO computed the difference between the components' actual and estimated lead times, and compared component initiatives to reduce lead times during 1994-2002 with those during 2002-2005. The military components' estimated lead times to acquire spare parts varied considerably from the actual lead times experienced. The effect of the lead time underestimates was almost $12 billion in spare parts arriving more than 90 days later than anticipated, which could negatively affect readiness rates because units may not have the inventory they need. If orders had been placed earlier, readiness rates could potentially have been improved. While having spare parts arrive earlier than estimated could potentially improve readiness, the effect of lead time overestimates resulted in obligating almost $2 billion more than 90 days earlier than necessary. The Army underestimated lead times, the Defense Logistics Agency (DLA) overestimated lead times, and the Air Force and Navy both overestimated and underestimated lead times.
The variances were due to problems such as miscoding late deliveries as not representative of future delivery times, lack of recorded lead time data, data input errors, estimates that did not reflect improvements made in actual lead times, and the use of standard default data instead of other data that may have been obtainable. Absent actions to address these problems, lead time estimates will continue to vary from actual lead times and will contribute to inefficient use of funds and potential shortages or excesses. The Under Secretary of Defense for Acquisition, Technology, and Logistics (USD (AT&L)) and the components' actions and initiatives to reduce lead times from 2002 to 2005 were less effective overall than previous efforts from 1994 to 2002. From 2002 to 2005, DOD-wide lead times were reduced by an average of 0.9 percent annually, as compared to an average reduction of 5.6 percent annually from 1994 to 2002, potentially leading to an additional $2.7 billion in lead time requirements and tying up money that could have been obligated for other needs. The higher rate of reduction from 1994 to 2002 can be attributed to three areas of focus: streamlining internal administrative processes, oversight from USD (AT&L), and developing strategic relationships with suppliers. However, from 2002 to 2005, USD (AT&L) no longer provided active oversight such as establishing lead time reduction goals, reporting metrics, reporting the impact of specific initiatives, or estimating the financial impact of reduced lead times, as had been done previously. Until steps are taken to renew management focus on reducing lead times, the components may continue to experience spare parts shortages and increased inventory levels to cover lead times.
The SSI program is authorized by title XVI of the Social Security Act. To qualify for SSI, an individual must meet financial eligibility and age or disability criteria. Generally, SSA determines an applicant's age and financial eligibility; the state's Disability Determination Service determines an applicant's initial medical eligibility. The maximum monthly benefit for an individual was $458 in 1995 and increased to $470 in 1996. An individual is ineligible for SSI in any given month if throughout that month he or she is an inmate of a public institution (42 U.S.C. 1382(e)(1)(A)). The title XVI regulation defines an inmate of a public institution as a person who can receive substantially all of his or her food and shelter while living in a public institution. SSA operating instructions provide that a prison is a public institution. SSI recipients may receive their payments in one of several ways: (1) checks can be mailed to them at their residences or, in some cases, to post office boxes; (2) checks can be direct-deposited into recipients' checking or savings accounts; or (3) checks can be sent to recipients' representative payees, that is, individuals or organizations that receive checks on behalf of SSI recipients who are unable to manage their own affairs (including legally incompetent people, alcoholics, drug addicts, and children). A representative payee is responsible for dispensing the SSI payment in a manner that is in the best interest of the recipient. Many events can affect a recipient's eligibility or payment amount. SSA requires that recipients voluntarily report these events and also monitors and periodically reviews recipients' financial eligibility. SSI recipients are responsible for reporting information that may affect their eligibility or payment amounts. If the recipient has a representative payee, the payee is responsible for reporting such information to SSA.
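The full-calendar-month test in 42 U.S.C. 1382(e)(1)(A) can be sketched as follows. This is a simplified illustration of the statutory rule, not SSA's actual adjudication logic; it treats the admission and release months as partial (and thus still eligible) unless confinement spans them entirely.

```python
from datetime import date

def ineligible_months(admitted: date, released: date) -> list[tuple[int, int]]:
    """Months (year, month) in which an inmate is ineligible for SSI:
    those spanned in full by the confinement, per 42 U.S.C. 1382(e)(1)(A).
    Simplified; SSA's rules may treat edge cases differently."""
    months = []
    y, m = admitted.year, admitted.month
    if admitted.day > 1:        # first month is partial unless admitted on the 1st
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)
    while (y, m) < (released.year, released.month):
        months.append((y, m))
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)
    return months

# A recipient admitted June 27, 1993, and still confined as of
# November 30, 1995, is ineligible beginning with July 1993.
print(ineligible_months(date(1993, 6, 27), date(1995, 11, 30))[0])  # (1993, 7)
```

Under these assumptions the example spans roughly 28 full months; at the 1995 individual benefit level of $458 a month (and slightly lower levels in 1993-1994), that is roughly consistent with the approximately $13,000 in erroneous payments in the June 1993 to November 1995 example cited later in this report.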
Significant events to be reported include a change in income, resources, marital status, or living arrangements, such as admission to or discharge from a public institution. A redetermination is a review of financial eligibility factors to ensure that recipients are still eligible for SSI and receiving the correct payment. A redetermination addresses financial eligibility factors that can change frequently, such as income, resources, and living arrangements. Redeterminations are either scheduled or unscheduled. They are conducted—by mail, telephone, or face-to-face interview—at least every 6 years, but may be conducted more frequently if SSA determines that changes in eligibility or erroneous payments are likely. The redetermination process includes a question about whether the recipient spent a full calendar month in a hospital, nursing home, other institution, or any place other than the recipient’s normal residence. Since the SSI program was established, SSA has recognized the potential for erroneous payments if SSI recipients become residents of public institutions, including state and federal prisons and county and local jails. SSA headquarters has established computer-matching agreements with state prison systems and the federal Bureau of Prisons. Under these agreements, the participating states and the Bureau can regularly provide automated prisoner information to SSA. SSA matches the information against its payment records to identify SSI recipients incarcerated in state and federal prisons. According to information provided by SSA, the process of matching prisoner information against the SSI payment records is a cost-effective way to identify SSI recipients who are in prison. However, to succeed, SSA determined it is essential that field offices work closely with public institutions, both county and local, to facilitate the flow of information concerning the SSI population. 
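In essence, such a computer match is a join of prisoner rosters against the SSI payment file on SSN. A minimal sketch (the field names and record layout are our assumptions for illustration, not SSA's actual systems):

```python
def match_prisoners(payment_ssns: set[str], prisoner_records: list[dict]) -> list[dict]:
    """Return prisoner records whose SSN appears in the SSI payment file;
    these are the candidates for a payment-suspension review. Field names
    are illustrative only."""
    return [rec for rec in prisoner_records if rec["ssn"] in payment_ssns]

# Hypothetical data: two roster entries, one of which matches the payment file.
payments = {"111-22-3333", "222-33-4444"}
roster = [
    {"ssn": "111-22-3333", "admitted": "1995-06-27"},
    {"ssn": "999-88-7777", "admitted": "1995-07-01"},
]
print(match_prisoners(payments, roster))  # one matching record
```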
Accordingly, SSA has, for years, instructed its field offices to (1) maintain regular contact (for example, regular visits) with prisons in their areas and (2) establish procedures for promptly obtaining information on events, such as admissions and discharges, that affect SSI eligibility and payment determinations. On May 24, 1996, the Commissioner of Social Security sent draft legislation to the Congress. This proposed legislation is designed to promote the timely carrying out of SSI provisions requiring cessation of payments to prisoners. The legislation would authorize the Commissioner to enter into agreements with willing state and local "correctional facilities." Under these agreements, the Commissioner would pay the facility for each report of a newly admitted inmate who has been a Social Security or SSI beneficiary but is not, as a prisoner, entitled to payments. In August, the Congress passed the Personal Responsibility and Work Opportunity Reconciliation Act of 1996. The act authorizes the Commissioner of SSA to enter into agreements with interested institutions. Under these agreements, the institutions would provide SSA with the names, SSNs, and other information about their inmates. SSA, subject to the terms of the agreements, would pay an institution for each inmate who SSA subsequently determines is ineligible for SSI. The act specifies, however, that the institution's primary purpose must be to confine individuals for offenses punishable by confinement for more than 1 year. This 1-year requirement would seem to preclude SSA from entering into agreements with, and making payments to, county and local jails, which generally incarcerate prisoners for shorter periods. Overall, in the jail systems we reviewed, we detected a total of $5 million in erroneous SSI payments to prisoners. This includes $3.9 million to 2,343 current prisoners in 12 jail systems and $1.1 million to 615 former prisoners in 2 jail systems.
Typically, an erroneous payment continued for 6 months or less and totaled about $1,700. SSA was unaware that many of these payments had occurred. SSA had made erroneous payments to 2,343 prisoners who were incarcerated in the 12 jail systems at the time of our work. These 2,343 prisoners represent about 4 percent of the prisoners with verified SSNs in these jail systems. As shown in table 1, SSA made payments to some prisoners in each of the 12 jail systems. The percentage of prisoners who received SSI payments differed somewhat among these jail systems, ranging from 2 to about 7.7 percent. In addition, there were 926 SSI recipients in jail at the time of our review who had not yet been there for 1 full calendar month. Collectively, these 926 prisoners were being paid about $387,000 a month. To the extent these prisoners remain in jail for at least 1 calendar month and SSA remains unaware of their incarceration, SSI payments made after a full month of incarceration would be erroneous. In the 12 systems we reviewed, as of the date we reviewed each system, we estimate that SSA paid $3,888,471 to the 2,343 current prisoners (see table 2). The average amount paid to an individual prisoner varied among the jail systems, but the overall average is approximately $1,700. Some payments were much larger. Erroneous payments to individual prisoners ranged from less than $100 to over $17,000. We determined that 136 prisoners received in excess of $5,000, including 19 who received more than $10,000. The percentage of current prisoners by range is shown in figure 1. Large erroneous payments to prisoners occurred because SSA paid some of them for long periods of time. For example, one SSI recipient was arrested on June 27, 1993, and was still in jail on November 30, 1995. SSA paid this prisoner monthly for this entire period; the erroneous payments totaled about $13,000, and SSA was continuing to pay him.
We determined that 85 percent of the 2,343 current prisoners had received erroneous payments for a period of 6 months or less at the time of our review. However, some were paid for longer periods. We found a total of 94 prisoners who had been paid for more than 1 year, including 13 who were paid for more than 2 years. The range of months during which payments continued is shown in table 3. The erroneous payments to current prisoners are likely to increase. Based on a review of SSA's records, we estimate that at the time of our review, SSA was unaware that 1,570 of the 2,343 recipients were in jail. SSA therefore continued to erroneously pay them. SSA had, however, stopped paying the remaining 773 and, for some of them, established an overpayment. We obtained information from two jail systems for 15,998 former prisoners who were released from jail between January 1 and June 30, 1995. We determined that of these former prisoners, 615 (3.8 percent) received SSI while incarcerated. In total, these former prisoners received about $1.1 million in payments. The number of former prisoners, total erroneous payments, and average amount per prisoner by jail system are shown in table 4. Included in the count of 419 former prisoners in Cook County are 17 who were also in our population of current prisoners. This indicates that these 17 were in jail and received SSI payments on at least two occasions. In Cook County, where we had data for both current and former prisoners, erroneous payments to former prisoners were higher. In that county, about 73 percent of the former prisoners were erroneously paid $1,000 or more, compared with 48 percent of the current prisoners. The difference is predictable because former prisoners have completed their time in the county or local jail and current prisoners have not. In Wayne County, where we only had data on former prisoners, 38 percent of the former prisoners were erroneously paid $1,000 or more.
Based on a review of SSA's records, we estimate that SSA is unaware that it erroneously paid 454 (74 percent) of the 615 former prisoners (see table 5). As of December 1995, SSA was making SSI payments to 340 of the 454 former prisoners. However, SSA was not recovering the erroneous payments by withholding a portion of the current payments. Our review suggests that many of the erroneous payments to prisoners stem from the fact that SSA field offices were not following existing instructions, which direct field offices to contact county and local jails to detect incarcerated SSI recipients. Other reasons for such payments include SSI recipients (or their representative payees) not reporting incarcerations and redeterminations not identifying some incarcerated SSI recipients. At the start of our review, we contacted 23 county and local jail systems to determine if they were regularly providing prisoner information to SSA. Only 1 county was, although a few said SSA contacted them occasionally to determine if specific people were incarcerated. In addition, 1 other county indicated that it had initiated contact with SSA but had not provided data. SSA had contacted 6 additional systems about regularly obtaining information on prisoners, but these had not yet provided any data. The remaining 15 systems reported that they had not been contacted by SSA about regularly providing information on prisoners. For example, according to an SSA branch office manager, no one from SSA had visited the jails in the office's service area in more than 20 years. Our review of SSA records indicates that although some SSI recipients or their representative payees report incarceration to SSA as required, many do not. We determined that of the 615 former prisoners who were erroneously paid, 217 had representative payees while in jail. We also determined that of these representative payees, 164 did not report the SSI recipient's incarceration.
About 87 percent of the representative payees who did not report were relatives; 1 percent were social agencies or other types of public and private organizations; and 12 percent were “other” types. Similar reporting problems were noted for current prisoners. In the redetermination process, SSA attempts to verify that recipients remain financially eligible for SSI and receive the correct payment. SSA records indicate that while in jail, 88 prisoners each had one redetermination and 4 prisoners each had two or more. We found that 32 of these 92 prisoners continued to be incarcerated and receive SSI payments after the redeterminations. According to SSA records, 22 of these redeterminations involved face-to-face contact between an SSA employee and the recipient or the representative payee. According to SSA officials, it is possible for inmates who are temporarily free, on work release or some other similar arrangement, to appear for a redetermination and subsequently return to jail. In addition, representative payees may complete the redeterminations, including face-to-face, on behalf of the SSI recipient. The identity of the actual individual who appeared at the face-to-face redetermination is not included in SSA’s computerized record, and a detailed review to determine who appeared at the interview was beyond the scope of our work. SSA’s operating instructions contain provisions for field offices to contact local jails in order to obtain prisoner data from them. However, SSA only recently began implementing this program systematically. According to agency officials and internal documents, most of the jails nationwide had been contacted by April 1996 to obtain information on current prisoners and future admissions, but not on former prisoners. 
According to agency officials, in March 1995, SSA field offices were instructed to contact local jails in their service areas and report to their regional offices concerning which jails would agree to provide SSA with prisoner data. However, the field offices did not consistently comply with these instructions, these SSA officials stated. In October 1995, after the start of our review, SSA headquarters issued a follow-up memo to the regional offices, directing them to instruct their field offices to (1) complete a detailed census of all jails in their jurisdictions and (2) report to headquarters by November 30, 1995. It was during this period of time that the agency initiated a concerted effort to contact all county and local jails nationwide. According to agency officials, prisons and jails are being contacted in the following order: (1) all state prisons, (2) the 25 largest county and local jails nationwide, and (3) all other county and local jails. According to SSA documents, as of March 1996, SSA had identified 3,878 county and local jails: SSA had obtained written agreements covering 2,647 of these and had agreements pending with 235. In addition, 843 jails were already reporting to SSA or held prisoners for less than 30 days; 153 jails had not responded or had refused to cooperate. SSA has requested that facilities it has contacted provide lists of their inmates to the local field offices. The agency has offered flexible reporting guidelines for frequency and format of the lists (computerized or on paper). In general, SSA has requested that facilities that have provided data to it previously or on a trial basis continue providing data. In addition, SSA has requested that facilities that have not provided any lists in the past provide (1) a current census of their inmates and (2) continuing lists of new admissions to the facility. Specifically, we found that SSA has contacted the 25 largest jail systems in the country and requested prisoner data from them. 
Most of these systems had agreed to supply SSA with prisoner data beginning in early to mid-1996. One system (Orange County, Calif.) began providing data in April 1995, and another system (New York City) has agreed to a pilot project including data beginning with January 1995. For many years, SSA has lacked an effective program to detect SSI recipients in county and local jails. It has relied primarily on (1) the recipients or their representative payees to voluntarily report incarceration and (2) redeterminations. Neither of these mechanisms has been completely effective; as a result, SSA has erroneously paid millions of dollars to thousands of prisoners in county and local jails. SSA was unaware of most of these payments. The number of SSI recipients who received SSI while in jail, including those with representative payees and those with redeterminations, raises numerous questions, including whether payments were obtained fraudulently. SSA’s recent initiative—to obtain better information on SSI recipients currently in county and local jails—is a positive step. However, the effort is not comprehensive enough. In general, SSA has begun to obtain information on current prisoners and new admissions. But SSA has not attempted to develop information, when available, on SSI recipients who may have been incarcerated and received payments in prior years. We found that this information is available and can provide SSA the means to identify and initiate recovery of many more erroneous payments. In order to identify SSI recipients who have been erroneously paid in prior years, we recommend that the Commissioner of SSA direct SSA field offices to obtain information from county and local jails on former prisoners. SSA should then process this information to (1) determine if it made erroneous payments to any of these former prisoners, (2) establish overpayments for the ones it paid, and (3) attempt to recover all erroneous payments. 
SSA commented on a draft of our report in a letter, dated July 16, 1996, and acknowledged that investigation of the productivity of securing information on former prisoners appears desirable and worthy of further examination. However, SSA expressed concerns about the availability of data, the potential negative effect of requests for more data on existing reporting arrangements with county and local jail officials, the cost-effectiveness of processing data on former prisoners who may no longer be receiving SSI payments, and other matters. SSA believes these concerns need to be resolved before implementing our recommendations. (The full text of SSA’s comments is included in app. III.) During its recent initiative to identify current prisoners, SSA identified local officials who know what data are available and can be provided. It should not be difficult or time-consuming, therefore, for SSA to contact these officials and determine if information on former prisoners is available. In addition, to identify information on former prisoners, SSA need not establish that “the majority” of county and local jail systems have such information, given that the largest jail systems account for the majority of prisoners. During the course of its initiative, SSA expanded the number of agreements with local correctional facilities to report prisoner information. According to SSA, some of these facilities were initially reluctant to enter into these agreements because SSA does not have the authority to pay for this information. However, unlike information on current prisoners, which requires monthly or quarterly reporting, information on former prisoners only requires a onetime effort by the local jail systems. Therefore, SSA need not assume that requesting such data will jeopardize existing agreements. 
If county and local jail systems are initially reluctant to provide data on former prisoners, SSA could emphasize the potential benefit to state programs (such as the recovery of erroneously paid state supplements) that such data exchanges may provide. We agree that SSA stands a better chance of recovering erroneous payments if the former prisoner is still receiving SSI. However, the fact that he or she is not currently receiving SSI should not prevent the implementation of our recommendation. To ensure program integrity, SSA has a responsibility to identify erroneous payments and collect overpayments. Once established, overpayments made to former prisoners remain in the record and could be recovered if the person again begins to receive SSI. Furthermore, SSA has the authority to recover SSI debts through a tax refund offset. SSA also took issue with the fact that we reported that until recently, identifying prisoners was not a priority at SSA. According to SSA, however, policies and operating procedures call for field offices to (1) maintain contacts with local institutions and (2) determine prisoner eligibility for payments. In our review, we found that field offices had not been following this guidance. We made minor changes to the text of the report to clarify this point. SSA also expressed concern about a statement in the report that erroneous payments to prisoners may be partially due to the vulnerability of redeterminations to abuse. Although we do not discuss the redetermination process in great detail, our review of SSA records indicates that 32 of 92 prisoners in our sample continued to receive benefits after a redetermination. If this process had been working as intended, SSA would have determined that these prisoners were no longer eligible to receive benefits. We made minor changes to the text of the report to clarify this point. 
We are sending copies of this report to interested congressional Committees and Subcommittees; the Director, Office of Management and Budget; and other interested parties. This report was prepared under the direction of Christopher C. Crissman, Assistant Director. Other GAO contacts and staff acknowledgments are listed in appendix IV. To determine if jail systems provide information on prisoners to SSA, we contacted 23 large county and local jail systems that met the following criteria: (1) an average daily prisoner population of at least 1,000, with emphasis on the largest U.S. metropolitan areas, (2) geographic dispersion, and (3) location in populous SSA regions. Of the 23 systems we contacted, we subsequently requested data from the 13 that met the following additional criteria: (1) an ability to provide us with automated data tapes suitable for matching, (2) willingness to provide the data at no cost, and (3) no current arrangement to provide SSA with prisoner data. Based on these criteria, between September 1995 and January 1996, we obtained automated data on current prisoners from 12 county and local jail systems. They collectively represent about 20 percent of the county and local prisoner population nationwide. The jail systems that provided data to us are in 10 states, in 6 of SSA’s 10 regions. The jail systems that provided current prisoner data to us were: Broward County (Fla.); Cook County (Ill.); Dade County (Fla.); Hamilton County (Ohio); Harris County (Tex.); King County (Wash.); Los Angeles County (Calif.); Maricopa County (Ariz.); New York City; Orange County (Fla.); Santa Clara County (Calif.); and Shelby County (Tenn.). In addition, during February and March 1996, we obtained data on former prisoners from Cook County and from Wayne County (Mich.). From 12 of the county and local jail systems, we obtained data for prisoners who were under their jurisdiction on specific dates. The dates were selected by the jail systems, based on their available resources.
Jail systems also supplied available personal identifiers, including name, Social Security number (SSN), date of birth, place of birth, mother’s maiden name (or next of kin), ethnicity or race, home address, and date of incarceration. We received information on a total of 97,813 current prisoners and eliminated duplicate records. This reduced the initial universe to 79,595 prisoners. We processed the information on these prisoners through SSA’s Enumeration Verification System (EVS), which uses key variables (name and date of birth) to verify the SSNs provided or determine an SSN if none is provided. We obtained verified SSNs for 53,420 of the 79,595 prisoners. We could not verify SSNs for the remaining 26,175 prisoners. To determine which prisoners had SSI records, we matched the verified SSNs against the Supplemental Security Record. We identified 12,951 prisoners with SSI records. We analyzed these 12,951 records to determine if any of the prisoners received benefits while they were incarcerated; we then extracted and analyzed the records of these prisoners. To test the accuracy of the current prisoner data provided by the counties, we selected a random sample of 240 current prisoners we had identified as having been paid SSI benefits while incarcerated (20 prisoners from each of 12 counties). We supplemented the random sample with 100 judgmentally selected cases (considering large payments to prisoners, long periods of incarceration, SSI eligibility date versus incarceration date, and other such factors). We requested that the jail systems verify (1) the booking date (the first day the prisoner was incarcerated) and (2) whether the prisoner was continuously incarcerated between the booking date and the date on which the jail created the list of inmates in its system. We requested that the jails verify the information from a source other than that used to produce the original data. The results of our random sample indicate that overall, our data were reliable. 
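The matching procedure described above (eliminate duplicate records, verify each SSN against name and date of birth, then match verified SSNs against benefit records) can be sketched roughly as follows. This is a hypothetical illustration only: the record layout, the `verify_ssn` stand-in for SSA's Enumeration Verification System (EVS), and all sample data are invented for this sketch and do not reproduce SSA's actual systems.

```python
def deduplicate(records):
    """Drop duplicate prisoner records, keyed on name, date of birth, and SSN."""
    seen, unique = set(), []
    for r in records:
        key = (r["name"], r["dob"], r.get("ssn"))
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

def verify_ssn(record, enumeration_index):
    """Stand-in for EVS: confirm a reported SSN against name and date of
    birth, or look one up by identity when no SSN was reported."""
    reported = record.get("ssn")
    identity = (record["name"], record["dob"])
    if reported and enumeration_index.get(reported) == identity:
        return reported
    for ssn, known_identity in enumeration_index.items():
        if known_identity == identity:
            return ssn
    return None  # SSN could not be verified

def match_to_ssi(records, enumeration_index, ssi_ssns):
    """Return verified SSNs that also appear in the SSI benefit records."""
    verified = [verify_ssn(r, enumeration_index) for r in deduplicate(records)]
    return [ssn for ssn in verified if ssn is not None and ssn in ssi_ssns]

# Invented sample data: one duplicate record and one unverifiable record.
enumeration_index = {"123-45-6789": ("DOE JOHN", "1960-01-01")}
prisoner_records = [
    {"name": "DOE JOHN", "dob": "1960-01-01", "ssn": "123-45-6789"},
    {"name": "DOE JOHN", "dob": "1960-01-01", "ssn": "123-45-6789"},  # duplicate
    {"name": "ROE JANE", "dob": "1955-05-05"},  # no SSN reported, not in index
]
matched = match_to_ssi(prisoner_records, enumeration_index, {"123-45-6789"})
```

In this sketch, records whose SSNs cannot be verified simply drop out of the match, mirroring the 26,175 prisoners for whom no verified SSN could be obtained.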
For five counties, no errors were found for the sample cases. For three counties, one case each could not be verified. For three other counties, minor errors were found in the data. For the final county, some of the information we had originally been provided was incorrect. At that time, the county had not yet entered the release dates for some prisoners into its computer system. As a result, the original information showed 123 SSI recipients in jail on November 16, 1995 (the date on which the county produced the original data), when they actually had been released before that date. We eliminated these cases from our review. Of the original 20 randomly selected cases in this county, 10 were unaffected, with the original information being correct. To obtain information on former prisoners, we asked two county systems (Wayne and Cook) to provide us with automated lists of all the prisoners released from their systems in the first 6 months of 1995. We received information on 16,821 prisoners, with no duplicate records. We processed these data through EVS, and obtained 15,998 verified SSNs. We matched the verified SSNs against the Supplemental Security Record to detect former prisoners who received SSI, and extracted and analyzed their records. The 10 SSA regions are shown in figure II.1. As discussed in appendix I, we obtained our data from county and local jail systems in 10 states—New York, Florida, Tennessee, Ohio, Illinois, Texas, Arizona, California, Washington, and Michigan—in 6 regions—II, IV, V, VI, IX, and X. In addition to those named above, the following also made important contributions to this report: Jeremy Cox, Evaluator; Mary Ellen Fleischman, Evaluator; James P. Wright, Assistant Director (Study Design and Data Analysis); and Jay Smale, Social Science Analyst (Study Design and Data Analysis).
Pursuant to a congressional request, GAO determined whether the Social Security Administration (SSA) is making erroneous supplemental security income (SSI) payments to prisoners in county and local jail systems.
GAO found that: (1) a total of $5 million has been erroneously paid to prisoners in local and county jail systems; (2) these erroneous payments are the result of SSA field offices' inability to obtain prisoner information on a regular basis, SSI recipients' failure to report their incarceration, and SSA's inability to verify recipients' eligibility for SSI; (3) the Commissioner of Social Security has sent draft legislation to Congress that would authorize payment to each correctional facility reporting newly admitted SSI beneficiaries; (4) erroneous payments to individual prisoners range from $100 to more than $17,000; (5) 136 prisoners have received more than $5,000 in erroneous SSI payments and 19 prisoners have received more than $10,000 in erroneous SSI payments; and (6) SSA is requesting its field offices to obtain prisoner information from both county and local jail systems and emphasizing the importance of monitoring field offices' compliance with this procedure.
The South Florida ecosystem extends from the Chain of Lakes south of Orlando to the reefs southwest of the Florida Keys. This vast region, which is home to more than 6 million Americans, a huge tourism industry, and a large agricultural economy, also encompasses one of the world’s unique environmental resources—the Everglades. Before human intervention, freshwater moved south from Lake Okeechobee to Florida Bay in a broad, slow-moving sheet. The quantity and timing of the water’s flow depended on rainfall patterns and natural processes that slowly released stored water. Water stored throughout the vast area of the Everglades supplied water to wetlands and coastal bays and estuaries even during dry seasons. For centuries, the Everglades provided habitat for many species of wading birds and other native wildlife, including the American alligator, which depended on the water flow patterns that existed before human intervention. The vast Everglades wetlands were generally viewed as an unproductive swamp to be drained for more productive uses. By 1927, the Everglades Drainage District had constructed 440 miles of canals, levees, locks, and dams. However, these water management projects were not sufficient to protect over 2,000 people from drowning and many more from being injured when the waters of Lake Okeechobee overflowed during a devastating hurricane in 1928. In 1930, the Army Corps of Engineers began constructing the Herbert Hoover Dike around the lake. A major drought from the early 1930s through the mid-1940s left the booming population of South Florida short of water and threatened by uncontrollable fires in the Everglades. In 1947, torrential rains, coupled with unusually high seasonal water levels and an abnormally wet summer followed by hurricanes in September and October, flooded nearly 2.5 million acres and left 90 percent of southeastern Florida underwater. Floodwaters stood in some areas for 6 months. 
As a result, in 1948, the Congress authorized the Central and Southern Florida Project—an extensive system of over 1,700 miles of canals and levees and 16 major pump stations—to prevent flooding and saltwater intrusion into the aquifer, as well as to provide drainage and supply water to the residents of South Florida. Areas immediately south of Lake Okeechobee in the Everglades Agricultural Area, which was drained by the project, are now farmed—primarily by sugar growers—while the eastern part of the region has become heavily urbanized. Canals carry water away from the Everglades Agricultural Area into leveed water conservation areas or directly into the Atlantic Ocean, bypassing much of the former Everglades and dramatically altering the timing, quantity, and quality of the water delivered to coastal estuaries. As figure 1 shows, these engineering changes, coupled with agricultural and industrial activities and urbanization, have reduced the Everglades to about half its original size. These changes have also had a detrimental effect on the environment. Wildlife populations have declined significantly, and some scientists believe that the reduced flow of freshwater into Florida Bay may be hastening its environmental decline. To address the deterioration of the ecosystem, the administration, in 1993, made the restoration of the Everglades and the South Florida ecosystem one of its highest environmental priorities. The South Florida Ecosystem Task Force was established by an interagency agreement to promote and facilitate the development of consistent policies, strategies, priorities, and plans for addressing the environmental concerns of the South Florida ecosystem. The Task Force consisted of assistant secretaries from the Departments of Agriculture, the Army, Commerce, and the Interior; an assistant attorney general from the Department of Justice; and an assistant administrator from the Environmental Protection Agency.
The Water Resources Development Act of 1996 formalized the Task Force; expanded its membership to include state, local, and tribal representatives; and designated the Secretary of the Interior as the group’s Chairperson. To accomplish the restoration of the South Florida ecosystem, the Task Force has established the following goals: Get the water right. This means restoring more natural hydrologic functions while providing adequate water supplies and flood control. This goal will be accomplished primarily by modifying the Central and Southern Florida Project to enlarge the region’s freshwater supply and to improve how water is delivered to natural areas using a variety of technologies. More than 500 miles of canals and levees will be removed to reestablish the natural sheet flow of water through the Everglades and restore more natural water flows to South Florida’s coastal bays and estuaries. Restore and enhance the natural system. Restoring lost and altered habitats will involve acquiring land and changing current land uses as well as halting the spread of invasive, exotic species and recovering threatened and endangered species. Transform the built environment. Balancing human needs with those of the natural environment will require developing lifestyles and economies that do not have a negative impact on the natural environment and do not degrade the quality of life. This will involve ensuring that traditional industries, such as agriculture, tourism, development, fishing, and manufacturing, continue to be supported while making sure that these industries are compatible with the goals of the restoration effort and that the quality of life in urban areas is maintained or enhanced. Participants in the restoration effort include 13 federal agencies, 7 Florida agencies and commissions, 2 American Indian tribes, 16 counties, and scores of municipal governments. 
Representatives from the state’s major industries, the commercial and private sectors, and environmental and other special interest groups also participate in the restoration effort. Appendix II lists the federal, state, tribal, and county participants. Appendix III contains additional details on the South Florida ecosystem and the efforts undertaken to restore it. Federal funding for the South Florida Ecosystem Restoration Initiative does not come from a single source. In addition to funds appropriated directly by the Congress for projects managed by the U.S. Army Corps of Engineers and restoration activities designated in the 1996 Federal Agriculture Improvement and Reform Act (Farm Bill), the federal agencies participating in the initiative determine and allocate funds from their own appropriations. Because the agencies account for these funds independently, no complete and consolidated financial data on the initiative are available. We asked each agency to provide data on the funds provided for the initiative—appropriations from fiscal year 1993 through fiscal year 1999 and obligations and expenditures from fiscal year 1993 through fiscal year 1998 (the latest year for which complete data are available). However, many of the agencies had difficulty providing these data because although they track appropriated dollars allocated for the initiative, they do not separately track the funds obligated and expended for it. On the basis of the financial data provided by the federal agencies, we estimate that from fiscal year 1993 through fiscal year 1999, over $1.2 billion in appropriated funds has been provided to the South Florida Ecosystem Restoration Initiative. As figure 2 indicates, the funding for the initiative has increased from about $85 million for fiscal year 1993 to about $238 million for fiscal year 1999. 
As figure 2 also shows, 1996 was an unusual funding year because the Farm Bill included a specific appropriation of $200 million for restoration activities. Through fiscal year 1998, federal departments and agencies obligated $883 million for various restoration activities. The restoration activities can be grouped into six major categories: (1) land acquisition; (2) the management of federally owned facilities or natural resources, such as national parks, wildlife refuges, and a national marine sanctuary, which may affect or be affected by the restoration initiative; (3) science-related activities, such as mercury contaminant studies; (4) infrastructure, such as the construction of water control structures; (5) water quality and habitat protection, such as the Corps’ wetlands permitting program; and (6) information management and assessment, such as coastal mapping. As figure 3 shows, the major activities being conducted are in area/natural resources management (32 percent), land acquisition (31 percent), science (15 percent), and infrastructure (11 percent). Some of these categories, particularly area/natural resources management and science, include activities that may be considered normal agency operations and would take place with or without the South Florida Ecosystem Restoration Initiative. In dollar terms, obligations totaled $291 million for area/natural resources management, $274 million for land acquisition, and $128 million for science. Of the $883 million obligated, $684 million was spent by the agencies or distributed to the state and other nonfederal entities for restoration activities in South Florida. As figure 4 shows, the Department of the Interior and the Corps of Engineers account for the bulk of the total federal expenditures (75 percent) during this 6-year period. The federal funding provided to date represents only a down payment.
While an official cost estimate for the total restoration effort has not been made, the implementation of the Central and Southern Florida Project Comprehensive Review Study, a major component of the restoration initiative referred to as the Restudy, is estimated to cost $7.8 billion. This cost will be shared equally by the federal and state governments. The Restudy, which will propose modifications to the existing Central and Southern Florida Project, is designed to substantially increase the amount of water that is delivered to natural areas while enhancing agricultural and urban water supplies. Additional efforts will be needed to complete the restoration initiative. According to the executive director of the Task Force, at least $2 billion more will be needed to acquire additional lands, construct other infrastructure projects, and eradicate exotic plant species. Consequently, the restoration effort, which is expected to take at least 20 years to complete, could cost at least $11 billion. Appendix IV contains additional details on the federal funds appropriated, obligated, and expended for the restoration of the South Florida ecosystem. Critical to guiding an endeavor as complex as the South Florida Ecosystem Restoration Initiative is a strategic plan that outlines how the restoration will occur, identifies the resources needed to achieve it, assigns accountability for accomplishing actions, and links the strategic goals of the initiative to outcome-oriented annual goals. Such a plan for the South Florida Ecosystem Restoration Initiative has not yet been developed. In addition, although the South Florida Ecosystem Restoration Task Force is responsible for facilitating and coordinating the initiative, it is not a decision-making body. However, as our review of two integral projects indicates, the coordination efforts of the Task Force and the other groups are not always sufficient to prevent schedule delays and cost overruns. 
Unless these issues are resolved, there is little assurance that the initiative will stay on track and be accomplished in a timely and efficient manner. While the Task Force has published several documents and is in the process of developing other strategies and plans to address specific restoration issues, it has not yet developed an overall strategic plan to guide the restoration effort. The benefits of having a strategic plan are many. A strategic plan contains goals and a strategy for achieving these goals, providing focus and direction and a benchmark for measuring performance. Such a plan also triggers a reassessment if progress in achieving the goals is not satisfactory. In addition, a strategic plan establishes priorities and time frames for accomplishing results by identifying the steps and resources necessary to achieve the goals, appropriate milestones, and ways to track or measure progress annually. Measurable goals also provide the Congress, the state of Florida, and the other participants with a sense of what can be achieved with the level of resources committed. The Task Force has published several documents—An Integrated Plan for South Florida Ecosystem Restoration and Sustainability: Success in the Making, The Annual Interagency Cross-Cut Budget, the Integrated Financial Plan, and annual reports—that provide information on the restoration activities of the participating agencies. These documents contain some of the components of a strategic plan; however, none, taken either separately or together, contains all the components needed. Success in the Making, published in April 1998, is intended to be an integrated plan for restoring and sustaining the South Florida ecosystem. It identifies three restoration goals. The first goal is to restore more natural hydrologic functions while providing adequate water and flood control. The goal is to deliver the right amount of water, of the right quality, to the right places, at the right times.
The second goal—to restore and enhance the natural system—centers on restoring habitats and recovering threatened and endangered species. The third goal—to transform the built environment—requires the development of sustainable lifestyles and economies that do not negatively affect the natural environment. Success in the Making also describes the strategies—adaptive management and innovative management—that the Task Force and its partners have adopted to achieve these long-term goals. However, the goals are not expressed in quantitative or measurable terms that would allow the Task Force to assess whether they have been achieved or how they need to be revised. The strategies presented do not outline how the goals are to be achieved or identify the resources required. In addition, Success in the Making does not describe how annual goals will be used to gauge progress. The Annual Interagency Cross-Cut Budget packages under one cover the justifications for participating organizations’ funding requests for restoring the South Florida ecosystem. The document includes a brief narrative describing the intended uses of the funds being requested. However, the document does not link the requests for resources to specific strategic or annual goals. While it includes a budget matrix showing the dollars appropriated to the participating agencies by functional area and fiscal year, this information is not always consistent with the appropriations data provided by the individual agencies. Under the Water Resources Development Act of 1996, the Task Force is required to prepare an integrated financial plan and recommendations for a coordinated budget request. This plan, which is prepared annually and is designed to facilitate budget development and eliminate duplication of effort, compiles descriptions of restoration projects.
The plan is intended to provide information on each project’s total estimated costs, starting and ending date, and appropriations to date and to identify the agencies involved in the project. However, the plan does not include all of the projects being undertaken by the participating agencies and does not provide consistent information on the total costs of the projects, the agencies responsible for funding the projects, or the sources and amounts appropriated to date. In addition, the information provided on the appropriations to date does not always match the appropriation data contained in the Cross-Cut Budget. Furthermore, although the plan provides information on the starting dates of projects, the plan is organized on a subregional basis and the identification numbers assigned to specific projects have changed from year to year, making it difficult to determine which projects are scheduled to begin in a particular year. Finally, the plan does not link the projects to the strategic goals outlined in Success in the Making. While the Task Force is not required to publish annual reports, its Florida-based working group has done so since 1994. These reports summarize the previous years’ accomplishments and set goals for the next year. However, because the format and organization of the reports vary from year to year, it is not possible to match the goals set in one year with the accomplishments reported in the following year. Furthermore, the accomplishments cited are not tied to the strategic goals presented in Success in the Making or to specific projects listed in the Integrated Financial Plan, making it difficult to use these reports to evaluate or track the progress made in the restoration initiative. According to federal and state officials we spoke with, these documents provide general information on the initiative and are good reference documents.
However, none of the officials thought that the documents were useful as management or tracking tools. In addition to these documents, various strategies or plans are being developed to address specific issues facing the initiative. For example, the Corps has developed the Restudy, which determines the modifications to the Central and Southern Florida Project needed to restore the ecosystem while still providing water and flood control to urban and agricultural sectors. At the same time, the U.S. Fish and Wildlife Service has drafted a multispecies recovery plan to address the recovery of the 68 federally listed threatened or endangered species located in South Florida. In addition, the Environmental Protection Agency and Florida’s Department of Environmental Protection recently began to develop a comprehensive water quality protection plan for the South Florida ecosystem. The working group is also developing an Integrated Strategic Plan, which will include a common vision for all the participants and strategies to measure their success in achieving this vision. However, according to our conversations with the project leader, this plan, which will not be complete until 2001, will not include all the components of an overall strategic plan. Several agency officials and others whom we spoke with during our review agreed that a strategic plan that integrated these plans and other activities proposed by the participating agencies into a “blueprint” for accomplishing the initiative would be very helpful and useful. Such a plan would also allow the agencies and the Congress to evaluate the progress being made and to assess whether the goals of the initiative are being achieved.

Coordination Has Not Prevented Schedule Delays and Cost Overruns

Restoring an ecosystem as vast and complex as the South Florida ecosystem will require extraordinary cooperation.
The South Florida Ecosystem Restoration Task Force, established to coordinate the development of consistent policies, strategies, plans, programs, and priorities, is the first partnership of its kind and coordinates restoration activities with federal, state, and local agencies, affected tribes, and the general public. Coordination among these parties is achieved, in large part, through the Task Force’s Florida-based working group, composed of top-level managers in Florida from the organizations represented on the Task Force. The working group holds monthly meetings that are open to the public to discuss issues affecting the restoration of the ecosystem. The Task Force also uses various advisory boards, such as the Governor’s Commission for a Sustainable South Florida, which represents a wide variety of public and private interests, and technical working groups, such as the Science Coordination team, to increase the agencies’ sharing of information on restoration projects and programs. In addition, several other outside groups have been established to coordinate and address project-specific issues. Several officials cited the development of the Restudy and its proposed implementation plan by a multidisciplinary team composed of 160 specialists from 30 state, federal, regional, local, and tribal governments as an example of increased coordination. However, the Task Force is a coordination body, not a decision-making body. Our review indicates that even with the coordination efforts of the Task Force and the other groups, two ongoing infrastructure projects that are integral to the restoration effort are taking longer and costing more than planned. 
Both the Modified Water Deliveries project and the Everglades National Park-South Dade Conveyance Canals (C-111) project are more than 2 years behind schedule and together could cost about $80 million more to complete than originally estimated, in part because the agencies involved have not been able to agree on components of the projects. These projects are intended to restore the natural hydrologic conditions in Everglades National Park. Our review of these projects indicates that the federal and state agencies involved are unable to agree on components of these projects, such as the lands to be acquired and the schedules for operating water pump stations. The Modified Water Deliveries project, authorized by the Everglades National Park Protection and Expansion Act of 1989, is intended to restore the natural hydrologic conditions in Shark River Slough and Everglades National Park. One of the problems associated with this project has been the inability of the participating agencies to agree on acquiring the 8.5 Square Mile Area, a residential area in the East Everglades. Originally, the Corps of Engineers, in consultation with Everglades National Park, completed a plan to protect the residents within the 8.5 Square Mile Area from further flooding as a result of the project. The Superintendent of Everglades National Park, however, concluded that the plan did not represent a workable solution, and the Corps of Engineers suspended further planning and design of the plan in 1994. A decision on how to resolve the 8.5 Square Mile Area issue was not made until 1998. With the support of the National Park Service, the local project sponsor recommended the complete acquisition of the area, rather than the original flood protection plan, at an additional federal cost of about $22 million.
This decision, however, faces a number of challenges before it can be implemented, including the completion of a supplemental environmental impact statement by the Corps of Engineers, congressional approval, and opposition from an affected Indian tribe. These challenges may delay the acquisition of the area and, ultimately, the completion of the project. The C-111 project is intended to restore freshwater flows to Taylor Slough and Everglades National Park and provide flood protection and other benefits to South Dade County. Problems with this project have been the inability to resolve disagreements among agencies and private interests and to acquire needed land in a timely manner. One of the project’s water pump stations was constructed on an expedited schedule to provide immediate environmental benefits to the national park. In December 1997, the Corps of Engineers completed the pump’s construction. However, as of March 1999, or 15 months after its completion, this pump had not been operated because Everglades National Park and agricultural interests had not been able to agree on an operating schedule. In addition, the National Park Service has not yet acquired lands needed for the operation of the pump. As early as May 1996, the Corps of Engineers notified the National Park Service that these lands were necessary to operate the pump. In 1999, almost 3 years later, the National Park Service made funds available for the condemnation of these lands. Federal officials attributed the delay in acquiring these lands to insufficient funds and staff needed to complete the land acquisition process. Federal and state officials told us that the agencies involved in the restoration effort have multipurpose missions that differ and sometimes conflict. For example, both the Corps of Engineers and the South Florida Water Management District are responsible for supplying water, controlling flooding, and restoring natural resources.
The mission of the Department of the Interior’s National Park Service, however, is to preserve unimpaired the natural and cultural resources of the national parks. The inability to resolve disagreements and acquire land in a timely manner has kept Everglades National Park from achieving the anticipated environmental benefits of the C-111 project. Agency officials noted that the C-111 and the Modified Water Deliveries projects are at critical junctures. If the participating agencies cannot resolve their disagreements, the success of these projects may be jeopardized. In addition, agency officials have commented that without some entity or group with overall management responsibility and authority to resolve differences, problems such as those encountered in implementing these two projects could continue to hinder the initiative. Appendix V contains a more detailed description of these two projects and the issues that the agencies cannot agree upon.

Restoring the South Florida ecosystem is a complex, long-term effort involving federal, state, local, and tribal entities, as well as public and private interests. The South Florida Ecosystem Restoration Task Force, a multiagency group with federal, state, local, and tribal representatives, was created to coordinate and facilitate the overall restoration effort. However, a strategic plan has not yet been developed that clearly lays out how the initiative will be accomplished and includes quantifiable goals and performance measures that can be used to track the initiative’s progress. In addition, although the Task Force and other groups have improved coordination, our review of two integral projects indicates that coordination does not always achieve consensus and there are times when management decisions are necessary to prevent schedule delays and cost overruns.
However, because the Task Force is a coordinating body, not a decision-making body, it is limited in its ability to manage and be accountable for the overall restoration effort. Given the scope and complexity of the initiative and the difficulties already being encountered, unless a strategic or master plan is developed to guide the restoration effort and a mechanism is developed to provide the authority needed to make management decisions, the ability to accomplish the initiative in a timely and efficient manner is at risk. To ensure that the South Florida ecosystem is restored in a timely and efficient manner, we recommend that the Secretary of the Interior, as the Chairperson of the South Florida Ecosystem Restoration Task Force, in conjunction with the other members of the Task Force, develop a strategic plan that will (1) outline how the restoration of the South Florida ecosystem will occur, (2) identify the resources needed to achieve the restoration, (3) assign accountability for accomplishing actions, and (4) link the strategic goals established by the Task Force to outcome-oriented annual goals. We also recommend that the Secretary, in conjunction with the other members of the Task Force, work with the organizations and entities participating in the restoration effort to develop and agree upon a decision-making process to resolve conflicts in order to accomplish the initiative in a timely and efficient manner. We provided a copy of this report to the departments of Agriculture, the Army, Commerce, and the Interior; the Environmental Protection Agency; and the South Florida Water Management District for review and comment. The Department of the Interior provided written comments on behalf of the departments of Agriculture, the Army, Commerce, and the Interior and of the Environmental Protection Agency. The agencies agreed with the importance of strategic planning but stated that our report fails to adequately acknowledge the substantial planning efforts that have already taken place and are ongoing.
The agencies pointed out that the Task Force is in the process of developing a plan much like the one called for in our recommendation. The agencies believe that our recommendation—to work with the organizations and entities participating in the restoration effort to develop and agree upon a decision-making process to resolve conflicts—is unrealistic, given the large number of federal, state, tribal, and local governments and agencies involved, and may be of questionable legality, given each agency’s statutory responsibilities and authorities. In addition, the agencies noted that the report focuses only on the federal efforts and ignores the state’s substantial efforts. The agencies also strongly disagreed with our conclusion that additional delays and cost overruns are likely to occur in the future and that the ability to accomplish the initiative’s overall goals is at risk. The agencies further believe that we oversimplified the causes of the delays for the two projects discussed in the report. Finally, the agencies provided some technical clarifications to the report, which we incorporated where appropriate. We are encouraged that the agencies recognize the value of and need to have a strategic plan. Our report discusses and describes in some detail the documents published by the Task Force that provide information on the restoration effort, including the goals, activities, and accomplishments of the agencies. In addition, while we do not list—nor did we intend to list—all of the various plans and strategies developed by the agencies involved in the restoration effort, we do specifically mention key planning efforts undertaken. However, as we point out in our report, an overall strategic plan that integrates all of the Task Force’s various documents and planning efforts has not yet been developed.
Although the Task Force has begun to develop an Integrated Strategic Plan, which the agencies say will be much like the one our report recommends, this plan is not expected to be complete until 2001. Furthermore, on the basis of our conversations with the project leader responsible for developing the plan, we do not believe that it will include all the necessary components of an overall strategic plan called for in the report. The agencies disagreed with our recommendation to develop a decision-making process to resolve conflicts because they believe that the creation of an entity to resolve conflicts would infringe upon the sovereign responsibilities of the governments and agencies involved in the effort and would, therefore, be of questionable legality and impractical. Our recommendation does not envision the creation of another body to decide conflicts or issues among the participants in the restoration of the South Florida ecosystem. Rather, we believe that a process for resolving conflicts needs to be established within the existing legal authorities and structures. Because we recognized that the restoration effort involves federal, state, tribal, and local governments and entities that have various missions and authorities, our recommendation was that the Task Force’s members work with the organizations and entities involved in the restoration effort to develop and agree upon a decision-making process to resolve conflicts in order to accomplish the initiative in a timely and efficient manner. Furthermore, in its written comments, the South Florida Water Management District, a key player and member of the Task Force, stated that the development and implementation of a conflict resolution process is very workable and would benefit the restoration effort, provided that it did not conflict with the sovereign rights of the entities involved and the decision-making authorities of the agencies. 
Without some means to resolve agencies’ disagreements and conflicts in a timely manner, problems such as those encountered in implementing the projects we reviewed could continue to hinder the initiative. While the agencies commented that our report focuses only on federal restoration efforts, appendix III includes information on key legislative and administrative actions taken by both the federal government and the state of Florida to restore the South Florida ecosystem. For example, the report cites the state’s establishment of the “Save Our Everglades” program in 1983, passage of the Everglades Forever Act in 1994, and establishment of the Governor’s Commission for a Sustainable South Florida in 1994. Although the agencies strongly disagreed with our conclusion that additional delays and cost overruns are likely in the future, we believe that the two projects we reviewed are similar to those that will be conducted in the future and that similar disagreements may occur. As stated in the report, without some means to resolve these disagreements in a timely manner, problems such as those encountered in implementing the two projects could continue to hinder the initiative. In addition, we believe that the report accurately presents areas of disagreement or conflicts that are affecting these two projects. Furthermore, the South Florida Water Management District, the local sponsor for both of these projects, described our characterization of the issues relating to these projects as accurate. The District agreed with the report that these two projects are at critical junctures requiring the expeditious resolution of the outstanding issues. The consolidated response of the federal agencies is presented in its entirety, together with our responses, in appendix VI. In written comments on a draft of this report, the South Florida Water Management District agreed with our recommendation to develop a decision-making process to resolve conflicts. 
The District stated that the development and implementation of a conflict resolution process was very workable and would benefit the restoration effort as long as it did not conflict with the sovereign rights of the entities involved and did not relinquish the decision-making authority of the entity that is responsible for making the final decision. The District also described our characterization of the issues relating to the two projects discussed in the report as accurate. Without commenting specifically on our recommendation to develop an overall strategic plan, the District stated that it would be helpful if our report contained specific recommendations on how to improve the Task Force’s ongoing strategic planning process. In addition, the District believed that readers of our report would benefit if we included information on (1) the key restoration accomplishments of the state agencies and the Florida legislature in protecting the natural system, (2) some of the positive outcomes of coordination and collaboration by the participants in the restoration effort, and (3) the financial contributions of the state of Florida to the restoration effort. We believe that our recommendation sufficiently addresses the major elements that should be included in an overall strategic plan for the restoration effort. These include (1) outlining how the restoration will occur, (2) identifying the resources needed to achieve the restoration, (3) assigning accountability for accomplishing actions, and (4) linking the strategic goals established by the Task Force to outcome-oriented annual goals. We do not believe that we should prescribe more than is contained in our recommendation. Rather, the Secretary of the Interior as Chair of the Task Force, in conjunction with the other Task Force members, should have the flexibility needed to successfully develop the strategic plan. 
Because appendix III of the report contains information on the key legislative and administrative actions taken by both the federal government and the state of Florida to restore the ecosystem, we did not include additional information on the state’s accomplishments. However, we added a statement to the report highlighting some of the positive outcomes of increased coordination among the stakeholders. We also agree that it is important to recognize the state’s financial contributions to the restoration effort and have included this information in our report. In addition, our report points out that the costs of one of the major components of the effort—the $7.8 billion Restudy—will be shared equally by the federal and state governments. The report also states that the federal and state governments have entered into several agreements to share the cost of land acquisition. The South Florida Water Management District’s comments are presented in their entirety, together with our responses, in appendix VII.

To determine how much and for what purposes federal funding was appropriated, obligated, and expended for the restoration of the South Florida ecosystem from fiscal year 1993 through fiscal year 1999, we contacted officials from the South Florida Ecosystem Restoration Task Force’s Office of the Executive Director. We also reviewed various budgetary documents, such as the Task Force’s Annual Interagency Cross-Cut Budget for 1999 and Integrated Financial Plan for 1998. However, because the Task Force does not track obligations and expenditures and no consolidated financial information exists, we contacted both headquarters and field officials from the U.S. Army’s Corps of Engineers; the departments of Agriculture, Commerce, and the Interior; and the Environmental Protection Agency to obtain this information. We contacted these agencies because they were the primary federal agencies participating in the restoration initiative.
We reviewed the information provided by these agencies but did not independently verify its reliability or trace it to the systems from which it came. We did not verify the completeness or accuracy of the data because such an effort would have required a significant investment of time and resources. However, we did attempt to reconcile inconsistencies in the data provided by the agencies. To determine how the initiative is being coordinated and managed and what other issues may impede its progress, we interviewed officials from the South Florida Ecosystem Restoration Task Force’s Florida-based working group, including representatives of the federal agencies involved in the restoration initiative, the South Florida Water Management District, and the Miami-Dade County Department of Environmental Resources Management. We also met with the chair of the South Florida Ecosystem Task Force, the executive director of the South Florida Ecosystem Task Force, the chair of the working group, the executive director of the Southern Everglades Restoration Alliance, the executive director of the South Florida Water Management District, and the counselor to the Assistant Secretary for Fish and Wildlife and Parks. In addition, we met with representatives of the Miccosukee Tribe, the National Audubon Society, and the Tropical Audubon Society, as well as the director of the Southeast Environmental Research Program at Florida International University. Because the initiative is just beginning, we reviewed two ongoing infrastructure projects integral to the restoration effort to assess how well the effort was being coordinated and managed. In addition, we reviewed applicable laws and regulations, reports, plans, and other documents relevant to the restoration effort. We conducted our review from September 1998 through April 1999 in accordance with generally accepted government auditing standards. 
We are providing copies of this report to the Honorable Dan Glickman, Secretary of Agriculture; the Honorable William M. Daley, Secretary of Commerce; the Honorable William S. Cohen, Secretary of Defense; the Honorable Bruce Babbitt, Secretary of the Interior; the Honorable Carol Browner, Administrator of the Environmental Protection Agency; and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions, please call me at (202) 512-3841. Major contributors to this report are listed in appendix VIII.

Pursuant to a congressional request, GAO reviewed the South Florida Ecosystem Restoration Initiative, focusing on: (1) how much and for what purposes federal funding was provided for the restoration of the South Florida ecosystem from fiscal year (FY) 1993 through FY 1999; and (2) how well the restoration effort is being coordinated and managed. GAO noted that: (1) on the basis of the data GAO obtained from the 5 primary federal departments and agencies participating in the initiative, GAO estimates that over $1.2 billion in federal funds was provided from FY 1993 through FY 1999; (2) the key restoration activities undertaken by the federal agencies were: (a) land acquisition; (b) the management of federally-owned facilities or natural resources, and a national marine sanctuary; (c) infrastructure projects; and (d) science-related activities; (3) over 75 percent of the federal expenditures during this 6-year period have been made by agencies within the Department of the Interior and by the U.S.
Army Corps of Engineers; (4) the federal funding provided to date represents only a down payment; (5) while no official cost projection for the total restoration effort has been made, a major component, the implementation of the Central and Southern Florida Project Comprehensive Review Study, referred to as the Restudy, is estimated to cost an additional $7.8 billion; (6) the Restudy is designed to substantially increase the amount of water that is delivered to natural areas while enhancing agricultural and urban water supplies; (7) according to the South Florida Ecosystem Restoration Task Force's executive director, at least $2 billion beyond the $7.8 billion will be needed to complete the restoration effort; (8) this money will be used to acquire additional lands, construct other infrastructure projects, and eradicate exotic plant species; (9) the Task Force is responsible for coordinating the participating entities' implementation of the initiative; (10) however, a strategic plan that clearly lays out how the initiative will be accomplished and includes quantifiable goals and performance measures has not yet been developed; (11) the Task Force is a coordinating body, not a decisionmaking body, and thus is limited in its ability to manage and make decisions for the overall restoration effort; (12) as GAO's review of two projects integral to the restoration effort indicates, even with coordination, the federal and state agencies involved are unable to agree on components of these projects; (13) their inability to agree has contributed to delays and cost overruns; and (14) given the scope and complexity of the initiative and the difficulties that have already been encountered, additional delays and cost overruns are likely to occur, and the participants' ability to accomplish the initiative's overall goals is at risk. |
OMB plays a key role in overseeing how federal agencies manage their investments by working with them to plan, justify, and determine how to best manage their IT projects. Each year, OMB and federal agencies work together to determine how much the government plans to spend on IT projects and how these funds are to be allocated. Over the last two decades, Congress has enacted several laws to assist agencies and the federal government in managing IT investments. Three key laws are the Paperwork Reduction Act of 1995, the Clinger-Cohen Act of 1996, and the E-Government Act of 2002:

The Paperwork Reduction Act of 1995—The act specifies OMB and agency responsibilities for managing information resources, including the management of information technology. Among its provisions, this law establishes agency responsibility for maximizing the value and assessing and managing the risks of major information systems initiatives. It also requires that OMB develop and oversee policies, principles, standards, and guidelines for federal agency information technology functions, including periodic evaluations of major information systems.

The Clinger-Cohen Act of 1996—The act places responsibility for managing investments with the heads of agencies and establishes chief information officers (CIO) to advise and assist agency heads in carrying out this responsibility. Additionally, this law requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by federal agencies and report to Congress on the net program performance benefits achieved as a result of these investments.

The E-Government Act of 2002—The act establishes a federal e-government initiative, which encourages the use of Web-based Internet applications to enhance the access to and delivery of government information and service to citizens, to business partners, to employees, and among agencies at all levels of government.
The act also requires OMB to report annually to Congress on the status of e-government initiatives. In these reports, OMB is to describe the administration’s use of e-government principles to improve government performance and the delivery of information and services to the public. OMB subsequently began several initiatives to help fulfill these responsibilities:

In February 2002, OMB established the Federal Enterprise Architecture (FEA) program. According to OMB, the FEA is intended to facilitate governmentwide improvement through cross-agency analysis and identification of duplicative investments, gaps, and opportunities for collaboration, interoperability, and integration within and across agency programs. The FEA is composed of five “reference models” describing the federal government’s (1) business (or mission) processes and functions, independent of the agencies that perform them; (2) performance goals and outcome measures; (3) means of service delivery; (4) information and data definitions; and (5) technology standards. The reference models are intended to inform agency efforts to develop their agency-specific enterprise architectures and enable agencies to ensure that their proposed investments are not duplicative with those of other agencies and to pursue, where appropriate, joint projects.

In April 2003, OMB established the Office of E-Government to promote better use of the Internet and other information technologies to improve government services for citizens, internal government operations, and opportunities for citizen participation in government. In recent years, OMB e-government initiatives have fostered the establishment of centralized systems across the government. Key efforts target electronically filing annual tax returns, providing a one-stop portal for emergency response information, developing a governmentwide electronic travel system, and consolidating the number of payroll systems to a small number of providers.
In March 2004, OMB established multiple “Line of Business” (LOB) initiatives to consolidate redundant IT investments and business processes across the federal government in areas including case management, grants management, human resources management, federal health architecture, information systems security, budget formulation and execution, geospatial information, financial management, and IT infrastructure. Each LOB initiative is led by an individual agency and supported by other relevant agencies. One of the initiatives’ goals is to reduce costs governmentwide through consolidation and standardization, and OMB reports to Congress each year on the costs and benefits of these initiatives. OMB officials explained that the current administration continues to support these LOB initiatives.

OMB uses several data collection mechanisms to oversee federal IT spending during the annual budget formulation process. Specifically, OMB requires 26 key federal departments and agencies (agencies) to provide information related to their IT investments, including agency IT investment portfolios (called exhibit 53s) and capital asset plans and business cases (called exhibit 300s). The 26 federal agencies are listed in table 1 and the exhibits are described below.

Exhibit 53. The purpose of the exhibit 53 is to identify all IT investments—both major and nonmajor—and their associated costs within a federal organization. Information included on agency exhibit 53s is designed, in part, to help OMB better understand what agencies are spending on IT investments. The information also supports cost analyses prescribed by the Clinger-Cohen Act. As part of the annual budget, OMB publishes a report on IT spending for the federal government representing a compilation of exhibit 53 data submitted by the 26 agencies.

Exhibit 300. The purpose of the exhibit 300s is to provide a business case for each major IT investment and to allow OMB to monitor IT investments once they are funded.
Agencies are required to provide information on each major investment’s cost, schedule, and performance. To help carry out its oversight role and assist the agencies in carrying out their responsibilities as assigned by the Clinger-Cohen Act, OMB developed a Management Watch List in 2003. This list included mission-critical projects that needed improvements in performance measures, project management, IT security, or their overall justification. Further, in August 2005, OMB established a High-Risk List, which consisted of projects identified by federal agencies, with the assistance of OMB, as requiring special attention from oversight authorities and the highest levels of agency management. More recently, in June 2009, to further improve the transparency into and oversight of agencies’ IT investments, OMB publicly deployed a website, known as the IT Dashboard, which replaced its Management Watch List and High-Risk List. The Dashboard displays information on the cost, schedule, and performance of 828 major federal IT investments at 26 federal agencies. In addition, the Dashboard allows users to download exhibit 53 data, which includes information on both major and nonmajor investments. According to OMB, these data are intended to provide a near real-time perspective of the performance of these investments, as well as a historical perspective. Further, the public display of these data is intended to allow OMB, other oversight bodies, and the general public to hold the government agencies accountable for results and progress. According to OMB officials, the agency’s analysts use the IT Dashboard to identify IT investments that are experiencing performance problems and to select them for a TechStat session—a review of selected IT investments between OMB and agency leadership that is led by the Federal CIO. As of December 2010, OMB had held 58 of these sessions.
Further, OMB officials told us that, in mid-2011, TechStat reviews began to occur at the agency level, and as of September 2011, each of the agencies that participated in the IT Dashboard held agency-level TechStat meetings. According to OMB, these sessions have enabled the government to improve or terminate IT investments that are experiencing performance problems. Over the last 5 years, we have issued several reports recommending improvements to the reliability of both the exhibit 300s and the IT Dashboard. In January 2006, we issued a report on the accuracy and reliability of agencies’ exhibit 300s. We found that underlying support for the information in the exhibit 300 was often inadequate. Specifically, we reported that the exhibit 300s had three types of weaknesses: (1) underlying documentation either did not exist or disagreed with the exhibit 300, (2) agencies did not always demonstrate that they complied with federal or departmental requirements or policies with regard to management and reporting processes, and (3) cost data were generally unreliable. We recommended that OMB direct agencies to identify and disclose weaknesses in data accuracy and reliability. We also recommended that OMB develop more explicit guidance for the exhibit 300s and provide training to agency personnel for completing exhibit 300s. In response, OMB issued guidance directing agencies to ensure that they are complying with OMB guidance on information quality, modified exhibit 300 guidance to make it more explicit in certain sections, and provided training to agencies on how to complete their exhibit 300s. More recently, we issued two reports on the IT Dashboard. In July 2010, we reported that the Dashboard had increased the transparency and oversight of federal IT investments; however, the cost and schedule ratings on the Dashboard were not always accurate for selected investments.
Specifically, of the eight investments selected for review, we found that four had notable discrepancies on either their cost or schedule ratings. We noted that a primary reason for the data inaccuracies was that while the Dashboard was intended to represent near real-time performance information, the cost and schedule ratings did not take into consideration current performance. As a result, the ratings were based on outdated information. Another issue with the ratings was the wide variation in the number of milestones agencies reported, which was partly because OMB’s guidance to agencies was too general. We recommended that OMB report on its planned changes to the Dashboard to improve the accuracy of performance information and provide guidance to agencies that standardizes milestone reporting. OMB agreed with our recommendations and initiated work to address them. Subsequently, in March 2011, we reported that OMB had initiated several efforts to increase the Dashboard’s value as an oversight tool, and had used the Dashboard’s data to improve federal IT management. These efforts include streamlining key OMB investment reporting tools, eliminating manual monthly submissions, coordinating with agencies to improve data, and improving the Dashboard’s user interface. However, we also noted that while the efforts contributed to data quality improvements, performance data inaccuracies remained. The ratings of selected IT investments on the Dashboard did not always accurately reflect current performance, which is counter to the website’s purpose of reporting near real-time performance. Specifically, we found that cost ratings were inaccurate for 6 of the 10 investments that we reviewed, and schedule ratings were inaccurate for 9. These inaccuracies can be attributed to weaknesses in how agencies report data to the Dashboard, such as providing erroneous data submissions, as well as limitations in how OMB calculates the ratings. 
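A variance-based rating of the kind described above can be illustrated with a minimal sketch. The 10 percent and 30 percent thresholds and the green/yellow/red scale below are illustrative assumptions for this example, not OMB's published calculation:

```python
def cost_variance_pct(planned_cost, actual_cost):
    """Percentage by which actual cost deviates from planned (positive = overrun)."""
    return 100.0 * (actual_cost - planned_cost) / planned_cost

def rate(variance_pct):
    """Map a variance percentage to a color rating.

    The 10% and 30% thresholds are illustrative assumptions, not OMB's
    published formula.
    """
    if abs(variance_pct) < 10:
        return "green"
    if abs(variance_pct) < 30:
        return "yellow"
    return "red"

print(rate(cost_variance_pct(100.0, 105.0)))  # green
print(rate(cost_variance_pct(100.0, 135.0)))  # red
```

Note that a rating computed this way reflects only the planned and actual figures fed into it; if agencies submit outdated milestone data, the rating is stale no matter how the formula is tuned, which is the weakness GAO identified.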
Accordingly, we recommended that heads of each of the five selected agencies with inaccurate ratings take steps to improve the accuracy and reliability of Dashboard information and OMB improve how it rates investments relative to current performance and schedule variance. In response, four of the selected agencies agreed with our recommendation, and one agreed to consider it. OMB agreed with our recommendation to update the schedule calculation, and stated that the agency has long-term plans to update the Dashboard’s calculations. As of July 2011, the 26 federal agencies that submit information to the IT Dashboard planned to spend about $78.8 billion on 7,248 IT investments in fiscal year 2011. DOD reported the most planned spending in IT investments (at $37.1 billion for 2,414 investments), followed by HHS (at $7 billion for 706 investments), and DHS (at almost $6 billion for 402 investments). Figure 1 shows the planned spending, in millions, on IT investments by federal agency. Appendix II provides more information on selected agencies’ IT investments. When providing IT investment information to OMB, federal agencies designate investments as major or nonmajor IT investments and identify whether expenditures are for new development or for ongoing operation and maintenance (O&M). Of the planned fiscal year 2011 expenditures listed on the IT Dashboard, major IT investments account for about $40.2 billion and nonmajor investments account for about $38.4 billion. Looked at another way, federal agencies plan to spend approximately $24.7 billion on development activities and about $54 billion on O&M. Figure 2 provides a visual summary of the relative cost of investments that are major and nonmajor investments, and that are in development and O&M. OMB often refers to the federal government’s approximately $79 billion annual investment in IT; however, the Dashboard does not provide data for all federal agencies. 
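A portfolio breakdown like the one summarized in figure 2 can be sketched by aggregating exhibit 53 line items. The field names and values below are illustrative assumptions, not the IT Dashboard's actual download schema:

```python
from collections import defaultdict

# One dict per investment line; field names and dollar figures are
# illustrative, not the Dashboard's actual exhibit 53 format.
investments = [
    {"type": "major",    "phase": "development", "cost_millions": 120.0},
    {"type": "major",    "phase": "o&m",         "cost_millions": 300.0},
    {"type": "nonmajor", "phase": "development", "cost_millions": 45.0},
    {"type": "nonmajor", "phase": "o&m",         "cost_millions": 210.0},
]

totals = defaultdict(float)
for inv in investments:
    totals[("type", inv["type"])] += inv["cost_millions"]    # major vs. nonmajor
    totals[("phase", inv["phase"])] += inv["cost_millions"]  # development vs. O&M

print(totals[("type", "major")])  # 420.0
print(totals[("phase", "o&m")])   # 510.0
```

The same two-way rollup, applied to the roughly 7,200 reported investments, yields the $40.2 billion major versus $38.4 billion nonmajor and $24.7 billion development versus $54 billion O&M splits cited above.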
While the IT Dashboard provides IT investment information for 26 federal agencies, it does not include any information about 61 other agencies’ investments. Specifically, the Dashboard presents information from 15 federal departments, 10 independent agencies, and 1 other agency. It does not include information from 58 independent executive branch agencies (such as the Securities and Exchange Commission, the Central Intelligence Agency, and the Federal Communications Commission) and 3 other agencies (such as the Legal Services Corporation). It also does not include information from the legislative or judicial branch agencies. Table 2 summarizes the executive branch agencies that are included and excluded from the Dashboard. According to OMB, the agencies on the Dashboard are those that have historically been involved in the annual capital planning process. While OMB encourages smaller agencies to use the Dashboard, most of these agencies choose not to. Accordingly, estimates of these agencies’ IT investments are not included in the $79 billion spending figure. When agencies develop their annual exhibit 53s, they are required to categorize each investment according to a primary function identified in the FEA reference models. For the fiscal year 2010 submissions, agencies were asked to select a primary function from categories within the FEA business or service reference models—several of which have similar titles. The primary functions identified in both of these models are listed in table 3. In their fiscal year 2011 submissions, agencies reported the greatest number of IT investments in the information and technology management category (1,536 investments), followed by supply chain management (781 investments), human resources management (661 investments), and financial management (580 investments). Similarly, planned expenditures on investments were greatest in the information and technology management category, at about $35.5 billion. 
Figure 3 depicts the total number of investments governmentwide by agency-identified primary function. This information can also be analyzed to determine the number of investments for each agency in each category. For example, within the information and technology management category, DOD has the greatest number of investments, at 487. Following are the Departments of Energy, with 172 investments, and Justice, with 135 investments. Figure 4 provides a visual representation of the number and cost of investments in the information and technology management category. Figure 5 shows the number of investments developed by federal agencies (excluding DOD) in the information and technology management category. Appendix III provides similar charts for three other functional areas: supply chain management, human resources management, and financial management. The guidance that OMB provides to agencies on how to report on their IT investments does not ensure complete reporting or fully facilitate the identification of duplicative investments. Specifically, OMB’s definition of an IT investment is broad, and agencies interpret it in different ways. The 10 agencies we evaluated differed on what systems they include as IT investments. For example, 5 agencies reported that they include all research and development systems, and 5 do not. As a result, not all IT investments are included in the federal government’s estimate of annual IT spending. In addition, OMB’s guidance to federal agencies on how to categorize their investments requires them to map each investment to a single primary function. This limits OMB’s ability to identify potentially duplicative investments both within and across agencies because similar investments may be organized under different functions. In its annual request for agencies to report on their IT investments using the exhibit 53, OMB uses the definition of IT from the Clinger-Cohen Act of 1996. 
Both the act and OMB’s guidance define IT as any equipment used in the automatic acquisition, storage, analysis, evaluation, manipulation, management, movement, control, display, switching, interchange, transmission, or reception of data or information. The exhibit 53 requires agencies to provide, among other things, a description, cost information, and FEA function for each IT investment in the agency’s portfolio. After agencies submit an initial draft of the exhibit 53, OMB reviews the draft and then provides an evaluation, including any areas requiring remediation. Through this process, agencies work with OMB to determine which major and nonmajor investments will be reported in the President’s budget. However, OMB officials reported that they have given agencies the flexibility to determine what to include as an IT investment, and agencies have chosen to interpret the definition of IT in different ways. Specifically, in implementing OMB’s guidance, 6 of the 10 agencies we evaluated exclude systems that fit the definition of an IT investment. One case involves space systems. Both NASA and Commerce include a spacecraft’s ground systems (such as satellite command-and-control systems and satellite data-processing systems) in their exhibit 53s. However, neither agency includes the technology components on the spacecraft itself—including instruments, computers, and transponders—even though these components acquire, manage, and transmit data. As a result, these investments are not included in the annual exhibit 53 submissions. For example, in its fiscal year 2011 exhibit 53 submission, Commerce’s National Oceanic and Atmospheric Administration (NOAA) included only $215.75 million of the $690.6 million budgeted for its Geostationary Operational Environmental Satellite-R series and only $181 million of the $382.3 million budgeted for its Joint Polar Satellite System.
Thus, at least $676 million in IT-related development was not included on the IT Dashboard for those two systems. Further, NASA’s reported $1.8 billion in IT investments comprises a very small portion of its over $68 billion portfolio of major space-related projects. In another case, five agencies—the Departments of Transportation, Commerce, Health and Human Services, Agriculture, and Homeland Security—stated that they do not always include systems that are in research and development as IT investments. For example, the Federal Railroad Administration (within the Department of Transportation) includes three research and development systems in its exhibit 53, but does not include others, such as the Positive Train Control system. This system is meant to integrate command, control, communications, and information systems for controlling train movements at a cost of about $27 million (as of 2008). Because agencies choose to exclude certain systems or categories of systems when they report to OMB on their IT investments, key costs are not included in OMB’s estimate of annual spending on federal IT investments. OMB officials acknowledge that agencies are able to interpret the definition of IT in different ways, but stated that they want to provide agencies some flexibility in deciding what they report on. Until OMB clarifies and enforces its requirement that agencies should be reporting on all IT investments, selected IT investments will not be subjected to the enhanced oversight, and OMB’s estimates of federal IT investments will be significantly understated. OMB’s guidance to federal agencies on how to categorize IT investments allows for analysis of investments with similar functions; however, it does not go far enough to allow identification of potentially duplicative investments. According to OMB guidance, each investment needs to be mapped to a single functional category within the FEA. 
This feature allows the identification and analysis of potentially duplicative investments across agencies. However, IT investments could fit into more than one category. For example, an agency could identify an inventory system as a financial management system or a supply chain management system. Thus, if an organization planned to develop an inventory system and searched for potentially duplicative investments in a group labeled as financial management systems, it would miss seeing potentially duplicative systems categorized as supply chain management systems. We recently reported on a DOD financial management system that was identified in a different functional category—supply chain management. We noted that because DOD had categorized the system as supply chain management, the cost of this system was not included in OMB’s estimate for financial management systems. Thus, we recommended that OMB take actions to facilitate accurate reporting of spending on financial management systems. As another example, an agency seeking to develop a wildfire management system would likely assess whether there is a similar system listed in the category of disaster preparedness; however, the agency would miss seeing an investment by the Department of the Interior for a wildfire management system because it was grouped in the information and technology management category. OMB officials acknowledged that there may be limitations in allowing agencies to choose only one descriptive category but noted that agencies can provide additional information on other applicable functions in their supplementary descriptions. However, searching through supplementary material is more labor-intensive than simply searching on primary and secondary functions. Until OMB requires agencies to identify additional functions, where applicable, it will be more difficult to identify similar and potentially duplicative investments within and across government agencies.
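The limitation described above, a duplicate search keyed only on a single primary function, can be sketched as follows. The investment names and category tags are hypothetical, used only to show why secondary tags surface matches that a primary-only search misses:

```python
# Hypothetical investments, each tagged with one primary function and, under
# the improved scheme, optional secondary functions.
investments = [
    {"name": "Inventory System A", "primary": "financial management",
     "secondary": ["supply chain management"]},
    {"name": "Inventory System B", "primary": "supply chain management",
     "secondary": []},
]

def find_candidates(function, invs, use_secondary):
    """Return names of investments matching a function, optionally
    searching secondary tags as well as the primary one."""
    return [i["name"] for i in invs
            if i["primary"] == function
            or (use_secondary and function in i["secondary"])]

# A primary-only search under supply chain management misses System A,
# which an agency filed under financial management.
print(find_candidates("supply chain management", investments, use_secondary=False))
# ['Inventory System B']
print(find_candidates("supply chain management", investments, use_secondary=True))
# ['Inventory System A', 'Inventory System B']
```

This is the mechanism behind the recommendation: allowing a secondary category makes both inventory systems visible in one query, without the labor-intensive reading of supplementary descriptions.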
OMB and federal agencies have undertaken several initiatives to address potentially duplicative IT investments. For example, OMB has efforts under way to consolidate similar functions through its LOB and FEA initiatives and has eliminated duplicative systems identified during its TechStat sessions. In addition, several of the agencies we evaluated have established guidance for ensuring new investments are not duplicative with existing systems. However, most of OMB’s recent initiatives have not yet demonstrated results. Further, several agencies do not routinely assess legacy systems to determine if they are duplicative. Until agencies routinely assess their entire IT portfolios to identify and remove or consolidate duplicative systems, such duplication will continue to exist. OMB has multiple initiatives under way that are to identify, eliminate, or avoid duplicative IT investments. These include its E-government, LOB, and FEA initiatives, as well as targeted IT modernizations and TechStat reviews. However, the results of these initiatives are mixed. A discussion of each follows: E-government initiatives. OMB and agency officials have reported that several of the e-government initiatives were successful at reducing duplication across the government. According to OMB, the E-payroll initiative consolidated 26 separate payroll systems down to 4 e-payroll providers. Similarly, 21 agencies now use the E-gov travel service and have seen a reduction in costs. For example, according to OMB, the Department of Housing and Urban Development decreased travel voucher costs from $75 per voucher to about $13.75. According to OMB officials, their shared services initiative—still in its planning stages—is a continuation of these e-government initiatives. LOB initiatives. 
OMB currently has nine LOB initiatives to consolidate redundant IT investments and business processes across the federal government in the areas of case management, grants management, human resources management, federal health architecture, information systems security, budget formulation and execution, geospatial information, financial management, and IT infrastructure. According to OMB’s annual reports on e-government and LOB initiatives as of fiscal year 2010, since 2006, federal agencies have reported spending about $445 million on LOB initiatives. However, the benefits of these initiatives are mixed. In its 2011 annual report, OMB stated that agencies had made progress in developing guidance and obtaining buy-in from multiple agencies. For example, OMB reported that the federal health architecture LOB allowed federal agencies to coordinate with each other and with tribal, state, local, and private sectors to begin developing standards for health information exchanges. Similarly, OMB reported that the budget formulation and execution LOB allowed the federal budget community to begin to develop common tools and best practices. However, the 2011 annual report described demonstrated cost savings for only three LOBs, of which only two provided the estimated amount of savings. Specifically, OMB reported that the geospatial and the information systems security LOBs resulted in cost avoidance or savings of about $9 million and $7.6 million, respectively, by allowing for blanket purchase agreements. OMB also reported that the grants management LOB allowed agencies and other organizations to reduce the number of systems, but it did not provide a number or specify which systems were eliminated. FEA. 
When originally developed in 1999, the FEA was intended to provide federal agencies with a common construct for their architectures and thereby facilitate the coordination of common business processes, technology insertion, information flows, and system investments among federal agencies. As part of the fiscal year 2004 budget cycle, OMB required agencies to align proposed IT investments to the FEA reference models; this information was then used to develop the initial LOB initiatives. Since that time, agencies have established individual enterprise architectures and used them to characterize their IT investments and to guide plans for the future. In 2004, we reported that the FEA was a work in progress and was still evolving. To this point, the Federal Chief Enterprise Architect recently began planning changes to the FEA framework—such as updating existing reference models and adding reference models for software applications, infrastructure, and security—to further assist agencies in reducing duplication and improving mission performance. OMB’s Chief Architect reported that comprehensive changes to the FEA are planned for fiscal year 2012. Targeted IT initiatives. OMB officials reported that ongoing IT initiatives, including efforts to consolidate federal data centers and to develop trusted Internet connections, could help reduce duplication across government. Specifically, in February 2010, OMB began an initiative meant to consolidate federal data centers and hardware and software assets through virtualization, cloud computing, and consolidation. In July 2011, OMB reported that the federal government had already closed 81 centers and was on track to close 137 centers by December 2011 and 800 by 2015. 
However, in July 2011, we reported that federal agencies’ data center inventories and consolidation plans were incomplete and recommended that agencies complete their data center consolidation inventories and plans, and that OMB’s data center task force oversee these efforts. Separately, in November 2007, OMB announced its trusted Internet connection initiative to improve security by reducing and consolidating external network connections. However, we reported in March 2010 that none of the 23 participating agencies had yet met all of the initiative’s requirements and recommended steps to improve communication and performance measures. In addition, we recently reported on other governmentwide initiatives and found that the FedRAMP project, which is to provide, among other functions, continuous security monitoring of cloud computing systems for multiagency use, is currently behind schedule, and has not yet defined all performance metrics. Similarly, the FedSpace project, which is to provide federal employees and contractors collaboration tools for cross-agency knowledge sharing, is also behind schedule and has not defined all of its performance metrics. We recommended establishing metrics so that the benefits of these initiatives can be effectively measured. TechStat reviews. OMB works with federal agencies to identify IT projects that need increased visibility in the agency; high-risk projects are then selected for a TechStat session. This program enables the government to improve or terminate IT investments that are experiencing performance problems. According to OMB officials, based on the TechStat reviews held as of March 2011, OMB reduced the scope of three investments that agencies identified as duplicative. While promising, only a small fraction of the over 7,000 investments that were identified by agencies for fiscal year 2011 have undergone TechStat reviews. 
Highly performing organizations manage investments in a portfolio approach, selecting and evaluating investments by how well they support the agency mission and “de-selecting” obsolete, high-risk, and low-value IT investments. Our prior work has shown that major federal agencies have guidance for the selection and oversight of IT investments. This guidance generally calls for establishing a department-level investment review board to select the projects to be included in the agency’s IT investment portfolio. In this way, selection decisions can be made in the context of all other investments, thus minimizing duplication across investments. Officials from several of the federal agencies we reviewed stated that they routinely evaluate new investments to ensure that they are not duplicative with existing systems. For example, investment review guidance at NASA, Justice, and Agriculture requires officials to assess whether an investment is duplicative before it is approved. Further, Commerce officials explained that finding duplication is a challenge, but they attempt to identify duplication through their investment selection process and through their Commerce IT review board. However, several of the agencies do not routinely assess legacy systems to determine if they are duplicative. Specifically, officials from several agencies with billions of dollars in investments noted that they have limited staff resources for performing all of the investment control processes—including reviewing exhibit 300s and IT Dashboard data—for the entire agency. However, given the sheer number of similar investments identified earlier in this report, such as Energy’s 172 information and technology management investments, and DOD’s 657 supply chain management investments, and the large amount of funds spent on these investments, it appears that thorough assessments are justified. 
Until agencies routinely assess their entire IT portfolios (including both developmental and operational systems) to identify and reduce duplicative systems, such duplication will continue to exist. Federal agencies spend tens of billions of dollars on IT investments each year. However, because OMB does not enforce the definition of IT provided in the Clinger-Cohen Act, agencies exclude key categories of IT investments—such as space systems—in their annual reports on IT investments. These excluded investments are not subjected to OMB’s IT oversight process, and their associated costs are not included in OMB’s annual estimate of IT investments. As a result, the nation’s actual annual investment in IT is much higher than the $78.8 billion identified by agencies. In addition, OMB’s guidance on identifying investments’ primary functions has led to a situation in which similar systems could be in different categories. With clearer categorizations, agencies and OMB would be better positioned to identify and address duplication in their system development efforts. OMB and federal agencies have initiatives under way to help address potentially duplicative systems. While selected initiatives have had success in consolidating systems, most have not yet demonstrated results. Further, the agencies we evaluated do not routinely evaluate legacy systems to determine if they are duplicative and can be eliminated or consolidated. Until OMB and federal agencies consistently target potentially duplicative investments within and across agencies, federal agencies may continue to spend taxpayer funds developing systems that perform similar functions. 
To ensure that IT investments are adequately identified and categorized, we recommend that the Director of OMB take the following four actions: specify which executive branch agencies are included when discussing the annual federal IT investment portfolio; clarify guidance to federal agencies in reporting on their IT investments by specifying whether certain types of systems, such as those in research and development and space systems, should be included; revise guidance to federal agencies on categorizing IT investments to ensure that the categorizations are clear and allow agencies to choose secondary categories, where applicable, which will aid in identifying potentially duplicative investments; and require federal agencies to report the steps they take to ensure that their IT investments are not duplicative as part of their annual budget and IT investment submissions. We received oral comments on a draft of our report from OMB officials, including the Federal Chief Enterprise Architect, a senior policy analyst, and a representative from the office of the General Counsel. In those comments, OMB generally disagreed with the first two recommendations and agreed with the second two recommendations. Specifically, OMB officials requested that GAO remove the first and second recommendations because they believe that the agency has already addressed them. Regarding our recommendation to clearly identify which agencies are included when discussing the federal IT investment portfolio, agency officials noted that both the A-11 guidance and the “Frequently Asked Questions” section of the IT Dashboard clearly indicate which agencies are included in the portfolio of IT investments. However, we believe that the recommendation is warranted because on its website and in presentations, OMB frequently refers to “the federal government’s $80 billion annual investment in IT” without clarifying that this $80 billion investment does not represent the entire federal government. 
Regarding our recommendation to OMB to clarify its guidance to federal agencies on reporting on IT investments, agency officials noted that existing guidance (including OMB circular A-11 and OMB memo 11-29) already discusses how to identify IT investments. We believe that the recommendation is appropriate because the existing guidance does not address key categories of IT investments (such as space systems and systems in research and development) where we found inconsistencies among agencies. OMB officials stated that the agency is working to address the third and fourth recommendations. Specifically, OMB plans to update the Federal Enterprise Architecture reference models in fall 2011 to provide additional clarity on how agencies should characterize investments in order to enhance the identification of potentially duplicative investments. Also, OMB’s IT Reform Plan includes several initiatives to reduce duplicative investments, including efforts in data center consolidation, cloud computing, and shared services. Officials noted that these initiatives will continue to be pursued with agencies through the annual budget process and related reporting requirements. While we acknowledge that these initiatives offer promise in identifying and reducing duplicative investments, we believe that OMB can do more to encourage agencies to look internally for duplicative investments. We also sought comments on a draft of our report from the 10 agencies in our review. While none of the agencies agreed or disagreed with our recommendations to OMB, several provided comments. Each agency’s comments are discussed in more detail below. In an e-mail, Agriculture’s Associate CIO for Technology Planning, Architecture, and E-Government stated that the department had no formal comments on the report. In written comments, the Acting Secretary of Commerce noted that the report thoroughly assessed OMB’s policy and guidance, and fairly assessed Commerce’s IT information and data. 
Commerce’s written comments are provided in appendix IV. In comments provided via e-mail, an official from DOD’s CIO office provided updated data for DOD’s IT investments. We did not make these changes in our report because we used data as of July 2011 throughout the report for our analysis. In written comments, HHS’s Assistant Secretary for Legislation agreed with the broad findings of the report and pointed out a distinction between OMB policies and guidance. The agency believes that this distinction is an issue that needs to be addressed by OMB and all federal agencies. We agree that it is appropriate for OMB and federal agencies to work together to determine if there is to be a meaningful distinction between OMB’s policies and its guidance to agencies. However, this distinction does not detract from our recommendation that OMB clarify its guidance to agencies on reporting on their IT investments. HHS’s written comments are provided in appendix V. In written comments, DHS’s Director of the Departmental GAO/OIG Liaison Office noted that the agency remains committed to continuing its work with OMB and other relevant stakeholders to address challenges related to identifying and eliminating potentially duplicative systems. DHS’s written comments are provided in appendix VI. In an e-mail, Justice’s Acting Assistant Director of the Audit Liaison Group stated that the department did not have comments. In comments provided via e-mail, Transportation’s Deputy Director of Audit Relations stated that the Positive Train Control system should not be included in the department’s exhibit 53 submission because the system will be commercialized, owned, and implemented by industry. We used this system as an example of a system in research and development that is not included in the federal portfolio of IT investments.
Because the agency is expending funds on this system and it is meant to integrate command, control, communications, and information systems, we believe that it should be reported as an IT investment. This example reinforces our recommendation to OMB to clarify its guidance to federal agencies to specify whether such investments should be included. In an e-mail, Treasury’s Audit Liaison stated that the department had no comments on the report. In an e-mail, an official from VA’s Office of Congressional and Legislative Affairs reported that the agency had no comments on the draft report. In an e-mail, NASA’s GAO/OIG Audit Liaison stated that the agency had no comments or technical corrections to add to the report. OMB and several agencies also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate Congressional committees, the Director of the Office of Management and Budget, and other interested parties. In addition, this report will be available on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-9286 or at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Our objectives were to (1) describe the current number and types of information technology (IT) investments reported by federal agencies on the IT Dashboard, (2) evaluate the adequacy of the Office of Management and Budget’s (OMB) guidance to federal agencies in reporting on IT investments, and (3) evaluate efforts to identify and address potentially duplicative investments. 
To describe the current number and types of IT investments, we analyzed data from agencies’ fiscal year 2011 exhibit 53 submissions. We downloaded this data from OMB’s IT Dashboard in March and July 2011. To categorize the investments, we used the functional categories that each agency identified for its own investments. We developed charts and graphs depicting IT investments by investment type (major or nonmajor), by life cycle phase (in development or in operations and maintenance), by agency, and by functional category. We then discussed the results of our analysis with OMB officials. To determine the reliability of the data on the IT Dashboard, we reviewed recent GAO reports that identified issues with the accuracy and reliability of agency data on the IT Dashboard. We determined that the data were sufficiently reliable for the purpose of this report, which is to depict the groupings and categories of information drawn from the Dashboard. To evaluate the adequacy of OMB’s guidance to federal agencies in reporting on IT investments, we reviewed OMB’s guidance on agencies’ exhibit 53 and exhibit 300 submissions. In addition, we evaluated how 10 federal agencies implemented OMB’s guidance. We selected the 10 agencies with the largest IT spending as reported in OMB’s fiscal year 2010 exhibit 53 data: the Departments of Agriculture, Commerce, Defense, Health and Human Services, Homeland Security, Justice, Transportation, the Treasury, and Veterans Affairs, and the National Aeronautics and Space Administration. We reviewed the guidance these agencies provided to their program managers for reporting on IT investments and identified types of investments that were excluded from reporting. We also met with OMB and agency officials to discuss current guidance on reporting on IT investments and any planned changes to this guidance. 
To evaluate efforts to identify and address potentially duplicative investments, we met with OMB officials to understand their responsibilities and processes related to identifying and addressing duplication. Then we analyzed documentation related to those processes, including the 2011 report to Congress on OMB’s e-government initiatives, OMB’s 25-point plan to improve IT, and our previous work on e-government initiatives, the Federal Enterprise Architecture, the Federal Data Center Consolidation initiative, and the trusted Internet connection initiative. We also analyzed documentation from the agencies in our review, including capital planning and investment control guides, investment selection criteria, and documentation from investment review board meetings, and we interviewed officials. We conducted this performance audit from February 2011 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The figures in this appendix provide information on selected federal agencies’ planned IT investments in fiscal year 2011. Unless otherwise stated, these figures include both major and nonmajor IT investments. The figures below show the number of investments that federal agencies have categorized in three key primary functions. For each primary function, the first figure shows a visual depiction of selected federal agencies, including the Department of Defense (DOD). The second figure provides more detail on the nondefense agencies. Unless otherwise stated, these figures include both major and nonmajor IT investments. Dave Powner at (202) 512-9286 or pownerd@gao.gov. 
In addition to the individual named above, the following staff also made key contributions to the report: Colleen Phillips, Assistant Director; Kate Agatone; Rebecca Eyler; Fatima Jahan; Lee McCracken; and Kevin Walsh.

The federal government invests heavily in information technology (IT). In recent years, the Office of Management and Budget (OMB) has made efforts to improve the transparency, oversight, and management of the federal government's IT investments. More recently, in June 2009, OMB deployed the IT Dashboard, a Web-based system that provides detailed performance information on federal IT investments. GAO was asked to (1) describe the current number and types of IT investments reported by federal agencies on the IT Dashboard, (2) evaluate the adequacy of OMB's guidance to federal agencies in reporting on IT investments, and (3) evaluate efforts to identify and address potentially duplicative investments. To address these objectives, GAO analyzed data from the IT Dashboard, analyzed 10 federal agencies' investment guidance and reports, and interviewed agency officials. According to data reported on OMB's IT Dashboard in July 2011, 26 federal agencies plan to spend almost $79 billion on 7,248 IT investments in fiscal year 2011. OMB often uses the $79 billion figure in referring to annual federal investments in IT; however, it is important to note that this figure does not reflect the spending of the entire federal government. It does not include IT investments by 58 independent executive branch agencies, including the Central Intelligence Agency, or by the legislative or judicial branches.
A closer look at the $79 billion in investments for the 26 agencies reveals that (1) the expenditures are split almost evenly between major and nonmajor (in terms of cost, risk, and other factors) investments; (2) about two-thirds of the expenditures are for systems in an operational state, while about one-third of the expenditures provide for the development of new systems; and (3) there are hundreds of investments providing similar functions across the federal government. For example, agencies reported 1,536 information and technology management investments, 781 supply chain management investments, and 661 human resource management investments. OMB provides guidance to agencies on how to report on their IT investments, but this guidance does not ensure complete reporting or facilitate the identification of duplicative investments. Specifically, agencies differ on what investments they include as an IT investment; for example, 5 of the 10 agencies GAO reviewed consistently consider investments in research and development systems as IT, and 5 do not. As a result, the 26 federal agencies' annual IT investments are likely greater than the $79 billion reported in fiscal year 2011. In addition, OMB's guidance to federal agencies requires each investment to be mapped to a single functional category. This limits OMB's ability to identify duplicative investments both within and across agencies because similar investments may be organized into different categories. OMB and federal agencies have undertaken several initiatives to address potentially duplicative IT investments. For example, OMB has efforts under way to consolidate similar functions through its "line of business" initiatives and has reduced the scope of three duplicative systems identified during executive reviews of high-priority projects. In addition, most of the agencies GAO reviewed established guidance for ensuring that new investments do not duplicate existing systems.
However, most of OMB's recent initiatives have not yet demonstrated results. Further, agencies do not routinely assess operational systems to determine if they are duplicative. Until agencies routinely assess their IT investment portfolios to identify and reduce duplicative systems, the government's current situation of having hundreds of similar IT investments will continue to exist. GAO is recommending that OMB clarify its reporting on IT investments and improve its guidance to agencies on identifying and categorizing IT investments. OMB did not agree that further efforts were needed to clarify reporting. Given the importance of continued improvement in OMB's reporting and guidance, GAO maintains its recommendations are warranted. |
Over the past 20 years, DOD has been engaged in an effort to modernize its aging tactical aircraft force. The F-22A and JSF, along with the F/A-18E/F, are the central elements of DOD’s overall recapitalization strategy for its tactical air forces. The F-22A was developed to replace the F-15 air superiority aircraft. The continued need for the F-22A, the quantities required, and modification costs to perform its mission have been the subject of a continuing debate within DOD and the Congress. Supporters cite its advanced features—stealth, supercruise speed, maneuverability, and integrated avionics—as integral to the Air Force’s Global Strike initiative and for maintaining air superiority over potential future adversaries. Critics argue that the Soviet threat it was originally designed to counter no longer exists and that its remaining budget dollars could be better invested in enhancing current air assets and acquiring new and more transformational capabilities that will allow DOD to meet evolving threats. The fiscal year 2007 budget request includes $800 million for continuing development and modifications for aircraft enhancements such as equipping the F-22A with an improved ground attack capability and improving aircraft reliability. The request also includes about $2.0 billion for advance procurement of parts and funding of subassembly activities for the initial 20 aircraft of a 60-aircraft multiyear procurement. JSF is a replacement for a substantial number of aging fighter and attack aircraft currently in the DOD inventory. For the Air Force, it is intended to replace the F-16 and A-10 while complementing the F-22A. For the Marine Corps, the JSF is intended to replace the AV-8B and F/A-18 A/C/D; for the Navy, the JSF is intended to complement the F/A-18E/F.
DOD estimates that as currently planned, it will cost $257 billion to develop and procure about 2,443 aircraft and related support equipment, with total costs to maintain and operate JSF aircraft adding $347 billion over the program’s life cycle. After 9 years in development, the program plans to deliver its first flight test aircraft later this year. The fiscal year 2007 budget request includes $4 billion for continuing development and $1.4 billion for the purchase of the first 5 procurement aircraft, initial spares, and advance procurement for 16 more aircraft to be purchased in 2008. We have frequently reported on the importance of using a sound, executable business case before committing resources to a new product development. In its simplest form, such a business case is evidence that (1) the warfighter’s needs are valid and can best be met with the chosen concept and quantities, and (2) the chosen concept can be developed and produced within existing resources—that is, proven technologies, design knowledge, adequate funding, and adequate time to deliver the needed product. At the heart of a good business case is a knowledge-based strategy to product development that demonstrates high levels of knowledge before significant commitments of time and money are made. The future of DOD’s tactical aircraft recapitalization depends largely on the outcomes of the F-22A and JSF programs—which represent about $245 billion in investments to be made in the future. Yet achieving expected outcomes for both these programs continues to be fraught with risk. We have reported that the F-22A’s original business case is unexecutable and does not reflect changing conditions over time. Currently, there is a significant mismatch between the Air Force’s stated need for F-22A aircraft and the resources the Office of the Secretary of Defense (OSD) is willing to commit. 
The business case for the JSF program, which has 90 percent of its investments still in the future, significantly overlaps production with development and system testing—a strategy that often results in cost and schedule increases. Both programs are at critical junctures that require DOD to make important business decisions. According to the Air Force, a minimum of 381 modernized F-22A aircraft are needed to satisfy today’s national strategic requirements—a buy that is roughly half the 750 aircraft originally planned, but more than double the 183 aircraft OSD states available funding can support. Since the Air Force began developing the F-22A in 1986, the business case for the program has changed radically—threats have changed, requirements have been added, costs have increased, funds have been added, planned quantities have been reduced, and deliveries of the aircraft to the warfighter have been delayed. There is a 198-aircraft capability gap today. Decisions in the last 2 years have worsened the mismatch between Air Force requirements and available resources, further weakening the F-22A program’s business case. Without a new business case, that is, an agreement on an appropriate number of F-22As for our national defense, it is uncertain whether additional investments in the program are advisable. The original business case for the F-22A program was to develop air superiority fighters to counter a projected threat of significant quantities of advanced Soviet fighters. During the 19-year F-22A development program, that threat did not materialize to the degree expected. Today, the requirements for the F-22A have evolved to include what the Air Force has defined as a more robust ground attack capability to destroy expected air defense systems and other ground targets and an intelligence-gathering capability. However, the currently configured F-22A is not equipped to carry out these roles without further investments in its development.
The F-22A modernization program is currently planned in three basic blocks, or spirals, of increasing capability to be developed and delivered over time. Current Air Force estimates of modernization costs, from 2007 through 2016, are about $4.3 billion. Additional modernization is expected, but the content and costs have not been determined or included in the budget. OSD has restructured the acquisition program twice in the last 2 years to free up funds for other priorities. In December 2004, DOD reduced the program to 179 F-22As to save about $10.5 billion. This decision also terminated procurement in 2008. In December 2005, DOD changed the F-22A program again, adding $1 billion to extend production for 2 years to ensure a next-generation fighter aircraft production line would remain in operation in case JSF experienced delays or problems. It also added 4 aircraft for a total planned procurement of 183 F-22As. As part of the 2005 change, aircraft previously scheduled in 2007 will not be fully funded until 2008 or later. OSD and the Air Force plan to buy the remaining 60 F-22As in a multiyear procurement that would buy 20 aircraft a year for 3 years—2008 through 2010. The Air Force plans to fund these aircraft in four increments—an economic order quantity purchase to reduce unit costs; advance procurement for titanium and other materials and parts to protect the schedule; subassembly; and final assembly. The Air Force plans to provide Congress a justification for multiyear procurement in May 2006, and the fiscal year 2007 President’s Budget includes funds for multiyear procurement. The following table shows the Air Force’s plan for funding the multiyear procurement. Air Force officials have told us that an additional $400 million is needed to complete the multiyear procurement and that the accelerated schedule to obtain approval and start the effort adds risk to the program, creating more weaknesses in the current F-22A business case.
A 198-aircraft gap between what the Air Force needs and what is affordable raises questions about what additional capabilities need to be included in the F-22A program. In March 2005, we recommended that the Air Force develop a new business case that justified additional investments in modernizing the aircraft to include greater ground attack and intelligence-gathering capabilities before moving forward. DOD responded to our report that business case decisions were handled annually through the budget process and that the QDR would analyze requirements for the F-22A and make program decisions. However, it is not clear from the QDR report, issued last month, what analyses were conducted to determine the gaps in capability, the alternatives considered, the quantities needed, or the costs and benefits of the F-22A program. Therefore, questions about the F-22A program remain: What capability gaps exist today and will exist in the future (air superiority, ground attack, electronic attack, intelligence gathering)? What alternatives besides the F-22A can meet these needs? What are the costs and benefits of each alternative? How many F-22As are needed? What capabilities should be included? Until these questions are answered and differences are reconciled, further investments in the program—for either the procurement of new aircraft or modernization—cannot be justified. The JSF program appears to be on the same path as the F-22A program. After being in development for 9 years, the JSF program has not produced the first test aircraft, has experienced substantial cost growth, has reduced the number of planned aircraft, and has delayed delivery of the aircraft to the warfighter. Moreover, the JSF program remains committed to a business case that invests heavily in production before testing has demonstrated acceptable performance of the aircraft.
At the same time, the JSF program has contracted to develop and deliver the aircraft’s full capability in a single-step, 12-year development program—a daunting task given the need to incorporate the technological advances that, according to DOD, represent a quantum leap in capability. The business case is a clear departure from the DOD policy preference that calls for adopting an evolutionary approach to acquisitions. Furthermore, the length and cost of the remaining development are exceedingly difficult to accurately estimate, thereby increasing DOD’s risks in contracting for production. With this risky approach, it is likely that the program will continue to experience significant cost and schedule overruns. The JSF program expects to begin low-rate initial procurement in 2007 with less than 1 percent of the flight test program completed and no production-representative prototypes built for the three JSF variants. Technologies and features critical to JSF’s operational success, such as a low observable and highly common airframe, advanced mission systems, and maintenance prognostics systems, will not have been demonstrated in a flight test environment when production begins. Other key demonstrations that will either not have started or will be only in their initial stages before production begins include testing of a fully integrated aircraft (mission systems and full software), structural and fatigue testing of the airframe, and shipboard testing of Navy and Marine Corps aircraft. By the time the first fully integrated and capable development JSF is expected to fly in 2011, DOD will already have committed to buy 190 aircraft at an estimated cost of $26 billion. According to JSF program plans, DOD’s low-rate initial production quantities will increase from 5 aircraft a year in 2007 to 133 a year in 2013, when development and initial operational testing are completed.
By then, DOD will have procured more than double that amount—424 aircraft at an estimated cost of about $49 billion—and spending for monthly production activities is expected to be about $1 billion, an increase from $100 million a month when production is scheduled to begin in 2007. Figure 1 shows the significant overlap in development and testing and the major investments in production. The overlap in testing and production is the result of a business case and acquisition strategy that has proven to be risky in past programs like the F-22A, Comanche, and B-2A, which far exceeded the cost and delivery goals set at the start of their development programs. JSF has already increased its cost estimate and delayed deliveries despite a lengthy replanning effort that added over $7 billion and 18 months to the development program. JSF officials have stated that the restructured program has little or no flexibility for future changes or unanticipated risks. The program has planned about 8 years to complete significant remaining activities of the system development and demonstration phase, including fully maturing 7 of the 8 critical technologies; completing the designs and releasing the engineering drawings for all three variants; manufacturing and delivering 15 flight test aircraft and 7 ground test articles; developing 19 million lines of software code; and completing a 7-year, 12,000-hour flight test program. The JSF program’s latest planned funding profile for development and procurement, produced in December 2004 by the JSF program office, assumes that annual funding will hover close to $13 billion between 2012 and 2022, peaking at $13.8 billion in 2013. If the program fails to achieve its current estimated costs, funding challenges could be even greater. The Office of the Secretary of Defense Cost Analysis Improvement Group was to update its formal independent cost estimate in the spring of 2005.
The group now does not expect to formally complete its estimate until spring 2006, but its preliminary estimate was substantially higher than the program office’s. A modest cost increase would have dramatic impacts on funding. For example, a 10 percent increase in production costs would amount to over $21 billion (see fig. 2). DOD has recently made decisions to reduce near-term funding requirements that could cause future JSF costs to increase. It had begun to invest in the program to develop an alternative engine for the aircraft but now plans to cancel further investments in order to make the remaining funds available for other priorities. DOD believes that an engine competition for the JSF offers no cost benefit or savings and that relying solely on a single engine supplier poses low operational risk. DOD has already invested $1.2 billion in funding for this development effort through fiscal year 2006. By canceling the program, it expects to save $1.8 billion through fiscal year 2011. Developing alternative engines is a practice that has been used in past fighter aircraft development programs like the F-16 and F-15 programs. An alternative engine program may help maintain the industrial base for fighter engine technology, result in price competition in the future for engine acquisition and spare parts, instill incentives to develop a more reliable engine, and ensure an operational alternative should the current engine develop a problem that would ground the entire fleet of JSF aircraft. As a result, the JSF decision should be supported by a sound business case analysis. To date, we have not seen such an analysis. Finally, the uncertainties inherent in concurrently developing, testing, and producing the JSF aircraft prevent the pricing of initial production orders on a fixed price basis. Consequently, the program office plans to place initial procurement orders on cost reimbursement contracts.
These contracts will provide for payment of allowable incurred costs, to the extent prescribed in the contract. With cost reimbursement contracts, greater cost risk is placed on the buyer—in this case, DOD. For the JSF, procurement should start when risk is low enough to enter into a fixed price agreement with the contractor, based on demonstrations of the fully configured aircraft and manufacturing processes. DOD has not been able to achieve its recapitalization goals for its tactical aircraft forces. Originally, DOD had planned to buy a total of 4,500 tactical aircraft to replace the aging legacy force. Today, because of delays in the acquisition programs, increased development and procurement costs, and affordability pressures, it plans to buy almost one-third fewer tactical aircraft (see fig. 3). The delivery of these new aircraft has also been delayed past original plans. DOD has spent nearly $75 billion on the F-22A and JSF programs since they began, but this accounts for only 122 new operational aircraft. Because DOD’s recapitalization efforts have not materialized as planned, many aircraft acquired in the 1980s will have to remain in the inventory longer than originally expected, incurring higher investment costs to keep them operational. According to DOD officials, these aging aircraft are approaching the end of their service lives and are costly to maintain at a high readiness level. While Air Force officials assert that aircraft readiness rates are steady, they agree that the costs to operate and maintain these aircraft over the last decade have risen substantially. Regardless, the military utility of the aging aircraft is decreasing. The funds used to operate, support, and upgrade the current inventory of legacy aircraft represent opportunity costs that could be used to develop and buy new aircraft.
From fiscal years 2006 to 2011, DOD plans to spend about $57 billion for operations and maintenance and military personnel for legacy tactical fighter aircraft. Some of these funds could be invested in newer aircraft that would be more capable and less costly to operate. For example, the Air Force Independent Cost Estimate Summary shows that the F-22A will be less expensive to operate than the F-15. The F-22A will require fewer maintenance personnel for each squadron, and one squadron of F-22As can replace two squadrons of F-15s. According to the independent cost estimate, this saves about 780 maintenance personnel as well as about $148 million in annual operating and support costs. Over the same time frame, DOD also plans to spend an average of $1.5 billion each year—or $8.8 billion total—to modernize or improve legacy tactical fighter aircraft (see fig. 4). Further delays or changes in the F-22A or JSF programs could require additional funding to keep legacy aircraft in the inventory and relevant to the warfighter’s needs. In testimony last year, we suggested that the QDR would provide an opportunity for DOD to assess its tactical aircraft recapitalization plans and weigh options for accomplishing its specific and overarching goals. In February 2006, the Secretary of Defense testified that recapitalization of DOD’s tactical aircraft is important to maintaining America’s air dominance. Despite this continued declaration about recapitalizing tactical aircraft, DOD’s 2006 QDR report did not present a detailed investment strategy that addressed needs and gaps, identified alternatives, and assessed costs and benefits. With limited information contained in the QDR report, many questions are still unanswered about the future of DOD’s tactical aircraft modernization efforts. As DOD moves forward with its efforts to recapitalize its tactical aircraft force, it has the opportunity to reduce operating costs and deliver needed capabilities to the warfighter more quickly.
To take advantage of this opportunity, however, DOD must fundamentally change the way it buys weapon systems. Specifically, the department must change how it selects weapon systems to buy, and how it establishes and executes the business case. Although the F-22A program has progressed further in the acquisition process than the JSF program, both programs are at critical decision-making junctures, and the time for DOD to implement change is now. Before additional investments in the F-22A program are made, DOD and the Air Force must agree on the aircraft’s capabilities and quantities and the resources that can be made available to meet these requirements. A cost and benefit analysis of F-22A capabilities and alternative solutions weighed against current and expected threats is needed to determine whether a sound business case for the F-22A is possible and whether investing an additional $13.8 billion over the next 5 years to procure or modernize these aircraft is justified. With more than 90 percent of investment decisions to develop, test, and buy JSF aircraft remaining, DOD could implement significant changes in its business case before investing further in the JSF program. The JSF program should delay production and investments in production capability until the aircraft design qualities and integrated mission capabilities of the fully configured and integrated JSF aircraft variants have been proven to work in flight testing. Also, an evolutionary acquisition strategy to limit requirements for the aircraft’s first increment of capabilities that can be achieved with proven technologies and available resources could significantly reduce the JSF program’s cost and schedule risks. Such a strategy would allow the program to begin testing and low-rate production sooner and, ultimately, to deliver a useful product in sufficient quantities to the warfighter sooner. Once the JSF is delivered, DOD could begin retiring its aging and costly tactical aircraft. 
Capabilities that demand as yet undemonstrated technologies would be included as requirements in future JSF aircraft increments that would be separately managed. An evolutionary, knowledge-based acquisition approach would not only help significantly minimize risk and deliver capabilities to the warfighter sooner, it would be in line with current DOD policy preferences. DOD’s use of an evolutionary, knowledge-based approach is not unprecedented. The F-16 program successfully evolved capabilities over the span of 30 years, with an initial F-16 capability delivered to the warfighter about 4 years after development started. Figure 5 illustrates the F-16 incremental development approach. The F-16 program provides a good acquisition model for the JSF program. For JSF, an evolutionary approach could entail delivering a first increment aircraft with at least as much capability as legacy aircraft with sufficient quantities to allow DOD to retire its aging tactical aircraft sooner and reduce operating inefficiencies. Limiting development to 5-year increments or less, as suggested in DOD’s acquisition policy, would force smaller, more manageable commitments in capabilities and make costs and schedules more predictable. Some of the more challenging JSF capabilities, such as advanced mission systems or prognostics technologies, would be deferred and added to follow-on efforts once they are demonstrated in the technology development environment—a more conducive environment to maturing and proving new technologies. A shorter system development phase would have other important benefits. It would allow DOD to align a program manager’s tenure to the completion of the phase, which would enable program managers to be held accountable for decisions. It also would allow DOD to use fixed-price-type contracts for production, and thereby reduce the government’s cost risk. 
Additionally, DOD should conduct a more comprehensive business case analysis of the costs, benefits, and risks before terminating the alternative engine effort. A competitive engine program may (1) incentivize contractors to minimize life cycle costs; (2) improve engine reliability and quality in the future; (3) provide operational options; and (4) maintain the industrial base. At a broader level, DOD needs to make more substantive changes to its requirements, funding, and acquisition processes to improve weapon system program outcomes. We have recommended these changes in past reports, and DOD has agreed with them. The January 2006 Defense Acquisition Performance Assessment report, based on a study directed by the Deputy Secretary of Defense, made some important observations regarding DOD acquisitions. The report concluded that the current acquisition process is slow, overly complex, and incompatible with meeting the needs of DOD in a diverse marketplace. Notably, the report confirmed that a successful acquisition process must be based on requirements that are relevant, timely, informed by the combatant commanders, and supported by mature technologies and the resources necessary to realize development. The report also pointed out that DOD’s acquisition process currently operates under a “conspiracy of hope,” striving to achieve full capability in a single step and consistently underestimating what it would cost to attain this capability.
The report makes a number of key recommendations for changing DOD’s acquisition process, including the following: develop a new requirements process that has greater combatant commander involvement and is time-phased, fiscally informed, and jointly prioritized; change the current acquisition policy to ensure a time-constrained development program is strictly followed; retain program managers from the start of development through delivery of the “Beyond Low-Rate Initial Production Report”; and move the start of a development program to the point at which a successful preliminary design review is completed. Our work in weapons acquisition and best practices over the past several years has drawn similar conclusions. We have made numerous recommendations on DOD’s acquisition processes and policy—as well as recommendations on specific major weapon system programs—to improve cost, schedule, and performance outcomes and to increase accountability for investment decisions. In 2000, DOD revised its acquisition policy to address some of our recommendations. Specifically, DOD has written into its policy an approach that emphasizes the importance of knowledge at critical junctures before managers agree to invest more money in the next phase of weapon system development. Theoretically, a knowledge-based approach results in evolutionary—that is, incremental, manageable, predictable—development and uses controls to help managers gauge progress in meeting cost, schedule, and performance goals. However, DOD policy lacks the controls needed to ensure effective implementation of this approach. Furthermore, decision makers have not consistently applied the necessary discipline to implement DOD’s acquisition policy and assign much-needed accountability for decisions and outcomes.
Some of the key elements of acquisition that we believe DOD needs to focus on include the following: constraining individual program requirements by working within available resources and by leveraging systems engineering; establishing clear business cases for each individual investment; enabling science and technology organizations to shoulder the burden of maturing new technologies before they enter a program; ensuring that the workforce is capable of managing requirements trades, source selection, and knowledge-based acquisition strategies; establishing and enforcing controls to ensure appropriate knowledge is captured and used at critical junctures before moving programs forward and investing more money; and aligning program managers’ tenure with the program’s acquisition time to ensure greater accountability for outcomes. In conclusion, despite DOD’s repeated declaration that recapitalizing its aging tactical aircraft fleet is a top priority, the department continues to follow an acquisition strategy that consistently results in escalating costs that undercut DOD’s buying power, forces DOD to reduce aircraft purchases, and delays delivering needed capabilities to the warfighter. Continuing to follow a strategy that results in disappointing outcomes cannot be encouraged—particularly given our current fiscal and national security realities. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or other members of the subcommittee may have.
Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance. GAO-06-356. Washington, D.C.: March 15, 2006.
Defense Acquisitions: Business Case and Business Arrangements Key for Future Combat System’s Success. GAO-06-478T. Washington, D.C.: March 1, 2006.
Defense Acquisitions: DOD Management Approach and Processes Not Well-Suited to Support Development of Global Information Grid. GAO-06-211. Washington, D.C.: January 30, 2006.
Defense Acquisitions: DOD Has Paid Billions in Award and Incentive Fees Regardless of Acquisition Outcomes. GAO-06-66. Washington, D.C.: December 19, 2005.
Unmanned Aircraft Systems: Global Hawk Cost Increase Understated in Nunn-McCurdy Report. GAO-06-222R. Washington, D.C.: December 15, 2005.
DOD Acquisition Outcomes: A Case for Change. GAO-06-257T. Washington, D.C.: November 15, 2005.
Defense Acquisitions: Progress and Challenges Facing the DD(X) Surface Combatant Program. GAO-05-924T. Washington, D.C.: July 19, 2005.
Defense Acquisitions: Incentives and Pressures That Drive Problems Affecting Satellite and Related Acquisitions. GAO-05-570R. Washington, D.C.: June 23, 2005.
Defense Acquisitions: Resolving Development Risks in the Army’s Networked Communications Capabilities Is Key to Fielding Future Force. GAO-05-669. Washington, D.C.: June 15, 2005.
Progress of the DD(X) Destroyer Program. GAO-05-752R. Washington, D.C.: June 14, 2005.
Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization. GAO-05-519T. Washington, D.C.: April 6, 2005.
Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-05-301. Washington, D.C.: March 31, 2005.
Defense Acquisitions: Future Combat Systems Challenges and Prospects for Success. GAO-05-428T. Washington, D.C.: March 16, 2005.
Defense Acquisitions: Changes in E-10A Acquisition Strategy Needed Before Development Starts. GAO-05-273. Washington, D.C.: March 15, 2005.
Defense Acquisitions: Future Combat Systems Challenges and Prospects for Success. GAO-05-442T. Washington, D.C.: March 15, 2005.
Tactical Aircraft: Air Force Still Needs Business Case to Support F/A-22 Quantities and Increased Capabilities. GAO-05-304. Washington, D.C.: March 15, 2005.
Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy. GAO-05-271. Washington, D.C.: March 15, 2005.
Tactical Aircraft: Status of F/A-22 and JSF Acquisition Programs and Implications for Tactical Aircraft Modernization. GAO-05-390T. Washington, D.C.: March 3, 2005.
Defense Acquisitions: Plans Need to Allow Enough Time to Demonstrate Capability of First Littoral Combat Ships. GAO-05-255. Washington, D.C.: March 1, 2005.
Defense Acquisitions: Improved Management Practices Could Help Minimize Cost Growth in Navy Shipbuilding Programs. GAO-05-183. Washington, D.C.: February 28, 2005.
Unmanned Aerial Vehicles: Changes in Global Hawk’s Acquisition Strategy Are Needed to Reduce Program Risks. GAO-05-06. Washington, D.C.: November 5, 2004.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Department of Defense's (DOD) F-22A and Joint Strike Fighter (JSF) programs aim to replace many of the Department's aging tactical fighter aircraft--many of which have been in DOD's inventory for more than 20 years. Together, the F-22A and JSF programs represent a significant investment for DOD--currently estimated at almost $320 billion. GAO has reported on the poor outcomes in DOD's acquisitions of tactical aircraft and other major weapon systems. Cost and schedule overruns have diminished DOD's buying power and delayed the delivery of needed capabilities to the warfighter. Last year, GAO testified that weaknesses in the F-22A and JSF programs raised questions as to whether DOD's overarching tactical aircraft recapitalization goals were achievable.
At the request of this Subcommittee, GAO is providing updated testimony on (1) the extent to which the current F-22A and JSF business cases are executable, (2) the current status of DOD's tactical aircraft recapitalization efforts, and (3) potential options for recapitalizing the air forces as DOD moves forward with its tactical aircraft recapitalization efforts. The future of DOD's tactical aircraft recapitalization depends largely on the outcomes of the F-22A and JSF programs--which represent about $245 billion in investments to be made in the future. Both programs continue to be burdened with risk. The F-22A business case is unexecutable in part because of a 198-aircraft gap between the Air Force requirement and what DOD estimates it can afford. The JSF program, which has 90 percent of its investments still in the future, plans to concurrently test and produce aircraft, thus weakening DOD's business case and jeopardizing its recapitalization efforts. It plans to begin producing aircraft in 2007 with less than 1 percent of the flight test program completed. DOD's current plan to buy about 3,100 new major tactical systems to replace its legacy aircraft represents a 33-percent reduction in quantities from original plans. With reduced buys and delays in delivery of the new systems, costs to keep legacy aircraft operational and relevant have increased. While the Secretary of Defense maintains that continued U.S. air dominance depends on a recapitalized force, DOD has not presented an investment strategy for tactical aircraft systems that measures needs, capability gaps, alternatives, and affordability. Without such a strategy, DOD cannot reasonably ensure it will recapitalize the force and deliver needed capabilities to the warfighter within cost and schedule targets. As DOD moves forward with its efforts to recapitalize its tactical aircraft, it needs to rethink the current business cases for the F-22A and JSF programs.
This means matching needs and resources before more F-22A aircraft are procured and ensuring the JSF program demonstrates acceptable aircraft performance before it enters initial production.
FHWA assists states’ efforts in building and maintaining highways through the federal-aid highway program. The agency distributes highway funds to the states through annual apportionments established by statutory formulas and by allocating discretionary grants. The states may obligate funds for construction, reconstruction, and improvement of highways and bridges on eligible federal-aid highway routes and for other purposes authorized in law once FHWA has apportioned the funds to the states. About 1 million of the nation’s 4 million miles of roads are eligible for federal aid. As a condition of receiving federal funds, states must adhere to federal laws and regulations. In particular, states must ensure that their highway program activities comply with title 23 of the United States Code (U.S.C.) and title 23 of the Code of Federal Regulations (C.F.R.), which contain provisions relating to the federal-aid highway program. FHWA has issued a number of regulations to implement and carry out these provisions. Several of these provisions relate specifically to states’ use of consultants and contractors. For example, states must comply with the Disadvantaged Business Enterprise Program requirements of 49 C.F.R. Part 26, which requires that a certain percentage of contracts be awarded to small businesses owned and controlled by socially and economically disadvantaged individuals, including minority- and women-owned businesses. Contracts for engineering and design services that are directly related to a construction project and use federal-aid highway funding must be awarded in the same manner as a contract for engineering and design services under certain provisions of the Brooks Architect-Engineers Act. That act establishes a qualifications-based selection process in which contracts for architects and engineers are negotiated on the basis of demonstrated competence and qualifications for the type of professional services required, at a fair and reasonable price.
While state DOTs are subject to many federal laws and regulations regarding contracting, they are not required to follow the Federal Acquisition Regulation when contracting for federally funded highway activities, except for the cost principles in 48 C.F.R. Part 31. Other specific federal provisions relating to state DOTs’ contracting practices are summarized in table 1. For projects using federal-aid funding, FHWA has also promulgated regulations that establish design, construction, and materials standards for highway projects that are on the National Highway System. In general, states’ laws, regulations, directives, safety standards, design standards, and construction standards apply to highway projects that are off of the National Highway System. FHWA has authority to oversee any project that receives federal-aid highway funds. However, the agency has increasingly delegated responsibility for oversight to state DOTs since the passage of the Intermodal Surface Transportation Efficiency Act in 1991. Oversight roles and responsibilities are outlined in stewardship agreements that each FHWA division office executes with its respective state DOT. These stewardship agreements outline when FHWA will have project-level oversight, or what is known as “full oversight,” over a project, and when that responsibility will be delegated to states. Stewardship agreements vary in how full oversight is determined. A stewardship agreement may indicate that full oversight occurs on only “high-profile” projects, which will be agreed upon by the state and the division office, or there may be a specific dollar threshold, such as all interstate projects that are over $1 million. Generally speaking, FHWA has project-level oversight for a relatively limited number of federal-aid projects.
Recently, FHWA developed guidance on the development of stewardship agreements and encouraged its division offices to revise their agreements on the basis of this guidance to achieve more consistency throughout the agency. Among other things, the guidance encourages the division offices to use risk management principles to determine where to focus their stewardship activities. The guidance also recommends that division offices develop performance measures to better track the health of the federal-aid highway program in their states. However, the guidance gives state DOTs and division offices broad flexibility in determining how risks should be assessed and how performance should be measured. In addition to having oversight over some specific projects, FHWA division offices oversee state DOTs through reviews of the departments’ programs and processes. Some of these reviews occur annually, and others are undertaken at the discretion of the division office on the basis of areas where there may be increasing risk to the highway program. These reviews are meant to ensure that states have adequate controls in place to effectively manage federally assisted projects and will generally result in recommendations and corrective actions for the state DOTs. Over the past several years, GAO has expressed concerns about FHWA’s oversight role. For example, we reported in 2005 that FHWA lacked a comprehensive approach in its oversight efforts. We found that even though FHWA had made progress in improving its oversight efforts, such as establishing performance goals and outcome measures to limit cost growth and schedule slippage on projects, FHWA had not linked these efforts to its day-to-day activities and was not using them to identify problems and target oversight. More generally, we have also raised concerns about federal transportation policy. 
For example, we have reported that federal transportation funding is not linked to system performance; that the federal government does not have direct control over the vast majority of the activities that it funds; and that highway grant funds are apportioned to state and local governments by formula, without regard to the needs, performance, quality, or level of effort of recipients. Transportation and other experts recently told us that the nation’s transportation policy has lost focus, and that the nation’s overall transportation goals need to be better defined and linked to performance measures that evaluate what the respective policies and programs actually accomplish. State DOTs have increased the amount and type of highway activities that they have contracted out to consultants and contractors over the past 5 years. In particular, state DOTs have increasingly contracted out preliminary engineering, design, right-of-way, and construction engineering and inspection activities. We also found that state DOTs have increasingly given consultants and contractors more responsibility for project quality through a growing trend to contract out construction inspection and engineering activities. Some state DOTs have used broader contracting types and techniques that give additional responsibility to consultants and contractors. For example, some state DOTs have used consultants to serve on their behalf as project managers or program managers to oversee and manage day-to-day activities on highway projects. On the basis of our survey (see sidebar) and discussions with state officials, we found that states have increased the extent to which they contract out some types of highway activities to consultants and contractors (see fig. 2). 
Our survey results indicated that over the past 5 years, more than half the states have increased the amount of preliminary engineering, design, and right-of-way activities as well as construction engineering and inspection activities they have contracted out to third parties. Fewer states have increased contracting out of maintenance and operations activities.
Federal-Aid Eligible Preventive Maintenance: Includes activities such as pavement preservation, safety improvement, and seismic retrofit. Right-of-Way: Includes activities such as land appraisal, land purchase negotiation, and assistance programs for individuals and businesses displaced by highway projects.
Officials from 27 of the 50 states responding to our survey indicated that their states had increased the contracting out of construction engineering and inspection activities over the past 5 years, although half the states report contracting out 25 percent or less of this work. In our interviews, several states indicated that they have recently had to increase their use of consultants for construction inspection activities. For example, the South Carolina DOT began to increase its use of consultants to perform construction engineering and inspection work in 2000. Department officials estimated that they will contract out about 10 percent of construction engineering and inspection work next year. Prior to 2000, the South Carolina DOT contracted out construction inspection and engineering work only on certain large, complex projects. Maryland State Highway Administration officials also said that they have been giving what have traditionally been in-house construction engineering and inspection activities to consultants, contracting out about 60 percent of these activities. Officials from at least 3 state DOTs we interviewed indicated that they would prefer to keep construction inspection and engineering activities in-house to retain greater control over the quality of contracted work.
For example, Illinois highway department officials said that they always assign an Illinois highway department engineer to oversee the consultant because they do not like to have consultants oversee other contractors and consultants, but that they need to contract out inspection activity for projects that require expertise they do not have in-house. The Maryland State Highway Administration officials also said that they would prefer to retain construction engineering and inspection activities in-house, but they have been unable to hire a sufficient number of staff. According to Utah DOT officials, the agency has so far been able to avoid contracting out any construction engineering and inspection activities, but it would likely contract out such activities in the future if workload burdens on in-house highway department staff continue to increase. Some state DOTs have used certain types of contracts where contractors assume more responsibility and risk for project delivery and day-to-day highway project oversight. For example, design-build contracts allow contractors to be involved in both the design and the construction of a highway project, and project management contracts (1) can assign additional oversight responsibilities to contractors or consultants and (2) can result in contractors overseeing other contractors. Figure 4 shows the number of states using these types of contracts and the frequency with which they use them. As figure 4 shows, more than half of the state DOTs have used a design-build approach at least once, while 20 state DOTs have not let any design-build contracts over the past 5 years. Our survey also indicates that many state DOTs still face constraints on their ability to use design-build contracting. Fifteen state DOTs reported that they do not have authority to enter into design-build contracts, and an additional 10 state DOTs reported that they have only limited design-build authority.
Few states have experience with the other contracting methods asked about in our survey. Five states reported that they had used project managers for more than 10 contracts, and 3 states reported having used construction managers/general contractors more than 10 times to oversee and manage the day-to-day activities of a project. Our survey also asked about a variety of other contracting techniques that state DOTs may use to help minimize construction time and cost, such as cost plus time bidding (A+B), incentive and disincentive contracts, and lane rental contracts. Almost two-thirds of the states indicated that they used more than one of these contracting techniques at least occasionally. Of the contracting techniques included in the survey, states reported using incentives and disincentives and cost plus time bidding most often over the past 5 years (see fig. 5). While some states have used these contracting techniques in their highway projects, many states reported that they did not use them very often. For example, only 10 states reported using more than 1 technique frequently. Of these states, only 4 reported using more than 3 of these techniques frequently. Two states reported using these tools either rarely or not at all. Although our survey results do not indicate widespread use of these different types of contracts and contracting techniques, that does not mean their use is unimportant or that it is not a growing trend in state contracting. State officials we interviewed told us that many of these types of contracts, which are relatively new to some state DOTs, are actively being considered and their use is likely to grow in the future. In addition, some techniques are more suited to projects in congested areas—such as lane rental contracts—and some states may have fewer such projects than others.
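Cost plus time (A+B) bidding, one of the techniques named above, is typically evaluated by adding each dollar bid (A) to the bidder's proposed construction duration in days (B) multiplied by a daily road-user cost that the agency sets in advance; the winning contractor is still paid only the A amount. The following is a minimal sketch of that evaluation arithmetic, with hypothetical contractors, bids, and a hypothetical daily road-user cost:

```python
# Sketch of A+B (cost plus time) bid evaluation.
# The daily road-user cost and all bids below are hypothetical values
# chosen for illustration; real values are project-specific.
DAILY_ROAD_USER_COST = 10_000  # dollars per day of construction, set by the agency

# Each bid: (contractor, A = dollar bid, B = proposed days to complete)
bids = [
    ("Contractor X", 5_200_000, 180),
    ("Contractor Y", 5_000_000, 240),
    ("Contractor Z", 5_350_000, 150),
]

def ab_score(bid):
    # Bids are ranked on A + (B * daily road-user cost); the winner is
    # still paid its A amount, but slower schedules lose in the ranking.
    _, a_dollars, b_days = bid
    return a_dollars + b_days * DAILY_ROAD_USER_COST

winner = min(bids, key=ab_score)
print(winner[0], ab_score(winner))  # prints: Contractor Z 6850000
```

Because schedule is priced into the ranking, a somewhat higher dollar bid can win if it promises substantially fewer days of construction, which is the time-saving incentive these techniques are meant to create.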
Other contract types, such as design-build contracts, are often used for projects that are large and complex in scope, which may be relatively rare in some states. While many state DOTs have increased their contracting out of various activities over the past 5 years, officials at many highway departments anticipate a slowing of this trend. As figure 6 shows, most state DOTs reported that they expect to maintain their current level of contracting over the next 5 years. For some activities, a number of states even expect declines in their level of contracting. For example, 15 state DOTs reported that they expect their contracting of design activities to decrease over the next 5 years. State DOT officials responded in the survey that their expectations for their contracting levels over the next 5 years are based on their expectations for highway program funding levels, legislative considerations, anticipated workload, and staffing levels. For example, Oregon DOT officials stated in our survey that they expect funding levels for highway projects to greatly decline by 2010, thereby reducing their need for consultants. However, the department noted that if it is able to secure new funding, it anticipates continuing at its current level of consultant use, which is at a historical peak for the department. Some states anticipated growth in their contracting for certain activities. For example, Pennsylvania and Utah DOT officials responded that they believe their states will increase contracting out work to consultants and contractors for all seven categories of highway activities. In addition, an official from another state indicated that the state DOT expects to increase its contracting out of federal-aid eligible preventive maintenance work in the next 5 years due to an anticipated shift in its program to focus on system preservation, rather than capital projects.
State DOTs indicate that the most important factor in their decisions to contract out highway activities is the need to access the manpower and expertise necessary to ensure the timely delivery of their highway programs, given in-house resource constraints. While state DOTs consider cost issues when making contracting decisions, cost savings are rarely the deciding factor, and no state we interviewed regularly performs formal assessments of costs and benefits before deciding whether to contract out work. Several studies have attempted to compare the costs of in-house and contracted work, although limitations in the studies’ methodologies make it difficult to conclude that the use of consultants and contractors is more or less expensive than using public employees over the long term. In addition to staffing and cost issues, other considerations, such as the desire to maintain in-house expertise, can play a role in a state DOT’s decision whether to contract out highway activities. In our survey, state DOTs listed “lack of in-house staff” as “very important” or “important” in their decision to contract out work more than any other factor for all seven of the highway activities included in the study, as shown in table 2. Furthermore, all of the highway department officials that we interviewed said that they do not have the in-house staff resources available to deliver their programs in a timely manner, so they must contract out work to deliver projects and services. For example, Illinois DOT officials said that at this point, they rely on consultants to fulfill the department’s work demands. In recent years, state DOTs have experienced substantial growth in funding for their highway programs without a commensurate increase in staffing levels. Results from our survey show that the majority of state DOTs have experienced constant or declining in-house staffing levels.
State DOTs indicated that staff reductions occurred most frequently in the areas of design, construction engineering and inspection, and maintenance, as shown in table 3. Of the 50 states that completed the survey, only 12 highway departments stated that they employ more professional and technical highway staff than they did 5 years ago. The remainder said that their workforces have either stayed the same or decreased over the last 5 years. Analysis of Census of Governments data also illustrates these trends in staffing at state DOTs. From 1992 to 2005, employment at state DOTs across the country declined by a little over 0.5 percent annually. At the same time, state spending on highways increased by 0.2 percent annually in real (inflation-adjusted) terms. These trends have resulted in an increase in the amount of highway spending per employee at state DOTs, with each state DOT employee on average having to “manage” a larger amount of his or her state’s program. Overall, across the country, state DOT inflation-adjusted expenditures per employee grew by 0.75 percent annually from 1992 to 2005. Officials at every state DOT we interviewed also acknowledged challenges in delivering the highway infrastructure and services demanded, given their in-house staffing situations. Several of the officials cited budgetary issues and political pressure to reduce the size of government as constraints on their ability to hire additional in-house staff. For example, Illinois DOT officials, whose staff has been cut nearly in half since the 1970s, stated that these staff reductions have been primarily linked to budget issues, such as those associated with the state’s public employee pensions. In South Carolina, the legislature has not substantially changed the highway department’s staffing levels despite the department’s increased program size.
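The per-employee figure follows arithmetically from the other two rates: when both annual rates are small, spending per employee grows by roughly the spending growth rate minus the employment growth rate. A quick sketch, using illustrative rates consistent with the figures above (about -0.55 percent for employment, matching "a little over 0.5 percent," and +0.2 percent for real spending):

```python
# Compound annual growth of spending per employee, given growth in
# spending and decline in employment. The rates below are illustrative
# values consistent with the Census of Governments figures cited above.
spending_growth = 0.002       # real highway spending: +0.2% per year
employment_growth = -0.0055   # state DOT employment: about -0.55% per year

# Spending per employee is the ratio spending / employment, so its
# annual growth factor is (1 + g_spend) / (1 + g_emp).
per_employee_growth = (1 + spending_growth) / (1 + employment_growth) - 1

print(f"{per_employee_growth:.2%}")  # prints: 0.75%
```

The small gap between the sum of the two rates and the reported 0.75 percent reflects the rounding of "a little over 0.5 percent" and the compounding of the ratio over the 1992 to 2005 period.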
Consequently, department officials stated that there is more work to do than the department can handle with its in-house staff alone. Officials from several state DOTs also mentioned that market conditions, including a lack of qualified engineers and the higher salaries paid in the private sector, limit their ability to hire and retain qualified personnel, even when they have the budget authority to do so. In Georgia, DOT officials said that the department is often engaged in bidding wars with private firms for prospective employees, and that it simply does not have the ability to offer equivalent compensation. In addition to addressing ongoing shortages of in-house staff, many state DOTs viewed contracting as a valuable strategy for managing short-term workload fluctuations. For example, Louisiana Department of Transportation and Development officials said that contracting is beneficial because it provides them with added flexibility and allows them to respond more rapidly to spikes in their highway program than if they had to bring new in-house staff on board. Once work slows, contracting also allows the state DOTs to draw down their workforce without having to lay off in-house employees. In our survey, state DOTs listed the desire to “maintain flexibility or manage variations in department workload” as “very important” or “important” more frequently overall than any other factor except “lack of in-house staff” in their decision to contract out work. In addition to increasing their overall level of manpower, state DOTs also frequently contract out work to access specialized skills or expertise they may not have in-house, according to our survey results and interviews with state highway officials. For example, the Pennsylvania DOT does not always have the specialized skills in-house to do certain geotechnical analyses and environmental impact assessments, so this work is contracted out.
Several state DOTs also indicated that they tend to use consultants on complex projects that require more specific expertise. For example, Illinois DOT officials told us that they typically use consultants for larger, more complex projects that generally will have a higher associated dollar amount due to the need for specialized expertise. In addition, Louisiana Department of Transportation and Development officials said that they usually hire consultants to design the more complex and larger projects due to a decrease in design staff as well as in-house expertise. Maryland State Highway Administration officials also indicated that staff reductions in their agency have had a disproportionate effect on positions requiring more experience and have led the agency to use a greater proportion of consultants on large projects. Cost savings do not appear to be an important driver in the trend toward increased contracting out of highway activities. Of the seven factors listed in the survey that might potentially lead a state DOT to decide to contract out an activity, “to obtain cost savings” was listed as “very important” or “important” the fewest times of any factor across six of the seven highway activities studied. Furthermore, “to obtain cost savings” was listed by states as “of little importance” or “of no importance” the most times of any factor for five of the seven highway activities studied, as table 4 shows. During our interviews, no state DOT official cited cost savings as a primary reason for their departments’ increased use of consultants and contractors in delivering their highway programs. The Georgia DOT initially attempted to perform some cost-benefit analyses when the department was going through a surge in its contracting out of work; however, the department abandoned these efforts after it became apparent that the results of the analyses did not matter, since the department needed to contract out the work regardless.
While cost savings are rarely the driver in the decision to contract out highway activities, the perception of higher contracting costs may influence states to continue to perform activities in-house, rather than contracting out the activities. In our survey, state DOTs listed the higher costs of consultants and contractors as a “very important” or “important” factor in the decision to use in-house staff to perform an activity more times overall than all but one factor, as shown in table 5. As an example, officials at the Pennsylvania DOT conducted an evaluation and found that it would be more expensive to contract out for highway line painting and decided to continue to do the majority of this work with in-house staff. Although state DOTs consider cost issues and estimate the costs of performing certain activities, none of the 10 departments from which we interviewed officials had a formal process in place to systematically or regularly assess the costs and benefits of contracting out activities before entering into contracts. State officials we interviewed acknowledged difficulties in accurately comparing costs of work performed in-house and work performed by contractors and consultants. For example, Minnesota DOT officials stated that they have difficulties in determining how to properly calculate overhead rates for in-house staff. Reports from state auditors in several states also acknowledged difficulties in comparing the costs of using consultants versus using in-house staff. Some reports also found that the highway departments in their states did not thoroughly or adequately study costs associated with the use of consultants compared with in-house staff to effectively manage the use of consultants, or actively negotiate with consultants to ensure that contract prices were fair and reasonable. 
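The overhead-rate difficulty that officials described, such as the Minnesota example above, can be illustrated with a minimal sketch: the apparent cost advantage of in-house work depends heavily on the overhead multiplier assumed, so the comparison with a consultant's billing rate can go either way. All rates and overhead figures below are hypothetical assumptions, not data from any state DOT or from this report.

```python
# Hypothetical illustration of why in-house versus consultant cost
# comparisons are sensitive to the assumed overhead rate.
# All figures are illustrative assumptions, not actual state DOT data.

IN_HOUSE_BASE_RATE = 40.00    # in-house engineer's direct hourly salary cost
CONSULTANT_RATE = 95.00       # consultant's fully loaded billing rate per hour

def in_house_loaded_rate(overhead_rate: float) -> float:
    """Fully loaded in-house hourly cost for a given overhead multiplier.

    overhead_rate covers benefits, facilities, equipment, and administration,
    expressed as a fraction of direct salary cost (e.g., 1.3 means 130%).
    """
    return IN_HOUSE_BASE_RATE * (1 + overhead_rate)

# With a low assumed overhead rate, in-house work looks much cheaper...
print(in_house_loaded_rate(0.8))   # 72.0 per hour, well below the consultant rate
# ...but a high assumed overhead rate erases most of the apparent savings.
print(in_house_loaded_rate(1.3))   # 92.0 per hour, nearly the consultant rate
```

Because the conclusion flips with a plausible change in one assumption, a state DOT without agreed-upon overhead accounting cannot reliably say which option is cheaper, which is consistent with the difficulties the officials and auditors described.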
While formal assessments are not undertaken, officials from several state DOTs we interviewed generally perceived contracting out to be more expensive than using in-house staff, particularly for engineering services. In fact, no state DOT official we interviewed perceived engineering work to be cheaper on an hourly basis when contracted out. However, some officials indicated that they found opportunities for cost savings in some circumstances for specific activities. For example, the Utah DOT found that it was cheaper to contract out its pavement management data collection work because it allowed the department to avoid having to invest in the expensive equipment required, which tends to become rapidly outdated. Officials from another state DOT acknowledged that there were potential cost efficiencies through contracting if contract employees were laid off during periods of reduced activity, such as during the winter months. This department conducted an analysis that found that if the agency laid off consultant construction inspectors for at least 3 months out of the year, the agency’s cost for the inspectors would equal that of in-house employees. However, officials stated that the department has not laid off consultant inspectors consistently due to concerns that the department would not be able to rehire them once their services were needed again. A number of studies have attempted to compare the costs of contracting out and using in-house staff for highway activities. In our review of these studies, we identified a series of methodological issues and other limitations that make it difficult to draw conclusions about whether consultants and contractors are more or less expensive than public employees over the long term. In addition, we reviewed other studies that have attempted to synthesize the results of existing cost comparisons and have raised many of these same issues. 
First, numerous challenges exist in obtaining accurate and reliable data to make comparisons. Such challenges include difficulties in properly assigning in-house overhead costs to specific projects and activities, finding “like” projects to compare, and using state DOT systems and records that have incomplete and unreliable data. Second, very few of the studies we reviewed sought to systematically determine the benefits resulting from contracted work or in-house work, thus providing an incomplete picture as to the extent to which contracting out highway activities might or might not be desirable. For example, additional costs of using consultants or contractors could be offset by benefits in completing the project more quickly than it would have been done by in-house staff, or the quality of the work may be worth the premium paid for the service. Finally, the studies did not adequately consider the long-term implications of contracting out work or performing it in-house, such as long-term pension obligations associated with in-house employees that are not incurred when work is directly contracted out. In addition to the factors that we have previously discussed, other considerations can play a role in a state DOT’s decision of whether to contract out certain activities. Next to the staffing issues that we have previously discussed, state DOTs most frequently reported using consultants to meet specific time frames or to increase the speed of completion of a task as an “important” or “very important” factor. State-level legislative requirements and policy mandates are also sometimes factors in state DOTs’ decisions to contract out work. For instance, the South Carolina Legislature enacted a budget provision in 1996, encouraging the highway department to use private contractors for bridge replacements, surface treatments, thermo-plastic striping, traffic signals, fencing, and guardrails, whenever possible. 
In our survey, the Alaska DOT responded that one of the reasons it contracts out preliminary engineering work is to satisfy direction that it has received from the state government on using consultants. Conversely, some states may also have legislative limitations on their ability to contract out work. For example, the California DOT, until recently, had only limited authority to contract out engineering services under the California constitution. Regarding the decision to keep work in-house, the most commonly cited factor in both our interviews and our survey was the desire to retain key skills and expertise. State DOTs recognize that they need to maintain a core of employees with sufficient experience and expertise to be able to effectively oversee and manage consultants and contractors and to also develop the expertise of more junior highway department employees. In both our interviews and our survey, state DOT officials stated that they often consciously keep certain activities in-house so that employees can improve their skills. The results from our survey indicated that state DOTs’ perceptions regarding differences in quality between work performed in-house and work contracted out may at times be an important factor in decisions to keep work in-house. For preconstruction activities in particular, “belief that work will be of a higher quality if performed by in-house staff” was one of the factors most frequently listed as being “very important” or “important” in the decision to perform work with in-house staff. Furthermore, one state DOT noted in the survey that the consultants it has used to perform construction engineering and inspection work did not have adequate experience to effectively do the job. 
In our interviews, few state officials expressed strong beliefs about differences in quality between in-house and contracted out work, although some departments acknowledged that the quality of work varies, depending on the firm being used, and that there have been issues regarding the performance of specific firms. We also performed a correlation analysis to determine whether the amount of work that state DOTs contract out is associated with certain demographic or economic conditions in the state. The level of correlation between most of the economic and demographic variables selected for the analysis and the percentage of work that state DOTs contract out was relatively weak or nonexistent. However, among the variables that we considered, the percentage of a state’s population living in urban areas had the strongest positive correlation with the amount of work that states contract out in preliminary engineering, design, and construction engineering and inspection activities. This correlation may occur because, as state DOT officials told us, they are more likely to contract out larger and more complicated projects, and there may be more of these types of projects in those states that are more urbanized. Also, for the majority of activities studied, there appears to be a moderate positive correlation between the amount of work contracted out and the pace at which states’ populations have grown. This correlation is consistent with the possibility that more rapidly growing states contract out greater amounts of work to help meet surges in their workload spurred by the increased demand for highways that growing populations foster, but may also be due to other factors. State officials we interviewed told us that they have sufficient tools and procedures in place to monitor and oversee contractors to ensure that the public interest is protected. 
These tools and procedures include such things as prequalification of contractors and consultants, regular monitoring procedures, assessments of work performed, and standards and requirements for certain types of work. However, 10 of the 11 state auditor reports we reviewed found weaknesses in state DOTs’ contracting and oversight practices. With current trends in contracting, state DOTs face additional challenges in conducting adequate oversight and monitoring. In particular, states’ oversight has generally become further removed from the day-to-day work on a project, and state officials expressed long-term concerns about retaining the expertise and staff needed to adequately oversee a growing contractor and consultant workforce. State DOTs’ contracts with consultants and contractors include a variety of mechanisms and controls that are intended to address potential project risks and protect the public interest, and the state officials with whom we spoke believe that the controls they have in place are adequate to protect the public interest. For example, state DOTs may prequalify consulting firms and contractors to ensure that those bidding on projects will be able to successfully perform contracted activities. A previous survey on state contracting practices found that state DOTs use a prequalification process for about two-thirds of the activities they contract out. The survey found that prequalification processes were most common for design, right-of-way, and operations activities, while prequalification processes were less common when contracting out for maintenance and construction work. A majority of state DOT officials we interviewed also stated that they have prequalification processes for at least some activities. As part of their prequalification requirements, state DOTs examine consultants’ and contractors’ previous job experience and work capacity to identify individuals and organizations from which the agency may accept a bid. 
In addition, for engineering services, state DOTs are required to use a qualification-based selection process to identify best-qualified bidders. It is only once these best-qualified bidders have been identified that the highway department enters into price negotiations to determine a “fair and reasonable” price for the contracted services. States also report that they have policies to regularly monitor and assess consultants and contractors during the project and upon project completion and may incorporate these assessments into prequalification determinations for future projects. State officials indicate that a state employee is always ultimately responsible for any particular project or service and is, therefore, responsible for ensuring that consultants and contractors are performing the work according to contract provisions and other applicable standards and specifications. State DOTs may address poor performance on an ongoing project by requesting that the contractor or consultant replace a particular employee or by requesting that the contractor or consultant address any construction mistakes. In extreme circumstances, state DOTs can also withhold payment to consultants or contractors. A poor performance rating at the end of a highway project may result in a reduced chance of securing future contracts. All state DOTs have policies and rules governing consultant and contractor independence. For projects on the National Highway System, state DOTs require consultants and contractors to certify that they do not have any potential or perceived conflicts of interest. Some state DOTs have prohibitions against the same firm performing both design and construction inspection activities on a project. State DOTs have also developed various standards, specifications, and policies to help ensure that the public interest is protected on highway projects. State DOTs require that standards and specifications be followed whether work is performed by department staff or contracted out. 
When work is contracted out, state DOTs outline all relevant standards and specifications—such as design and construction standards, and specifications regarding materials acquisitions—in the terms of the contract after a winning bidder has been selected. Finally, federal regulations require each state agency to have an approved quality assurance program for materials used in, and the construction of, federal-aid highway construction projects. Quality assurance programs identify contractors’ materials sampling, testing, and inspection requirements as well as specific quality characteristics to be measured for project acceptance. The regulations also require that each state DOT’s quality assurance program provide for an acceptance program and an independent assurance program. In 1995, FHWA revised its regulations to allow state DOTs to use contractor material testing data in their acceptance decisions if accompanied by validation and verification procedures. However, state employees must always make the final acceptance decision. On full oversight projects, the state’s FHWA division office is responsible for providing final acceptance of projects at the completion of construction, but the state is still responsible for providing project-level acceptance of construction and materials quality during construction. State auditors in 10 of the 11 states that responded to our inquiry found numerous weaknesses in state DOTs’ contracting and oversight practices. For example, one auditor’s report found that the state DOT’s prequalification procedures do not always ensure that the most qualified bidder is selected. 
Furthermore, auditors’ reports in at least 5 states found that the state DOTs did not aggressively negotiate fair and reasonable prices when using qualifications-based selections, or had not established criteria to define what constitutes a reasonable price, resulting in negotiated prices that are perceived to be too high compared with national benchmarks, or compared with other states’ experience. In addition, another auditor report found examples where the state DOT failed to consistently assess consultant and contractor performance, and examples where quality assurance procedures were not adequately followed, which can result in lower-quality highway construction. State DOTs may encounter challenges in conducting sufficient oversight and monitoring for highway projects, given current trends in contracting out. For projects using federal-aid highway funds, FHWA requires that a state highway employee always have ultimate responsibility for successful project completion. However, when consultants and contractors have oversight or managerial roles on a project, the state highway employee may be further removed from the day-to-day project activities. This situation has the potential to limit the ability of state DOT employees to identify and resolve problems that occur during construction. For example, the National Transportation Safety Board—in its report on an accident in Colorado in which a car collided with a steel girder that had fallen from an overpass—found that the state DOT did not conduct active oversight, and that it was the department’s policy to avoid telling a contractor how to accomplish contracted work and to avoid interfering as the contractor carried out the work. In addition, state highway employees are increasingly moving into project manager roles in which they may oversee several projects. Several state DOT officials cited concerns and challenges in conducting adequate oversight in such situations. 
In some states, consultants oversee multiple projects as well. For example, the Maryland State Highway Administration is beginning to use construction management inspection contracts. Under these contracts, the contractor becomes responsible for managing work on specific projects as well as a portfolio of projects. Erosion of state DOTs’ in-house expertise as a result of staff cuts and retirements also creates additional risk in the long term and creates challenges for state DOTs in effectively overseeing consultant and contractor work. All of the state DOT officials with whom we spoke believe that they currently have sufficient expertise in-house to carry out their highway programs and to oversee consultants and contractors. However, according to officials at several state DOTs, there is a “thinning” of expertise in their departments and fewer knowledgeable staff are available to oversee and monitor consultants. As we have previously stated, state DOTs have not been able to hire a sufficient number of staff to replace experienced staff who may soon be retiring. In addition, state DOTs compete with private firms for what in some states is a relatively small number of new engineers graduating from college. State highway officials in several states also commented that, given the limitations inherent in a state budget, college graduates often elect to either (1) go into the private sector right away or (2) receive training at the state DOT, and then leave for a higher paying job in the private sector. Ensuring that consultants and contractors are independent and free from conflicts of interest can be difficult. As we have previously discussed, state DOTs are using consultants and contractors for a greater variety of services, including project engineering and design, construction inspection, and highway maintenance. 
Officials from several state DOTs have expressed some concern because consultants and contractors may work on multiple state projects where they are the lead on one project and a subconsultant/subcontractor on another project. For example, one firm may have an undisclosed financial relationship with another firm beyond the work being done with the state DOT, and this situation could pose difficulties if one of these firms is hired to inspect the other. While some state DOT officials acknowledged that situations have arisen that present the potential for conflicts of interest, none of the state DOT officials with whom we spoke thought their agencies had any significant problems with contractor and consultant independence. The federal-aid highway program provides states with broad flexibility in deciding how to use their funds, which projects to pick, and how to implement these projects; therefore, FHWA has a limited role in determining how consultants and contractors should be used by state DOTs. FHWA performs project-level oversight on only a limited number of projects. FHWA division offices also conduct reviews of state programs and processes that are related to the use of consultants and contractors. These oversight activities are generally limited to ensuring compliance with federal rules and regulations. On a national level, FHWA has recently conducted some reviews that touch on states’ use of consultants and contractors. Through these reviews, FHWA has identified a variety of risks associated with the use of consultants and contractors, but the agency has not fully assessed how to respond to these risks. FHWA has only limited authority over many aspects of state DOTs’ programs, including their contracting practices. According to FHWA officials, the agency does not have any specific policy regarding highway departments’ use of consultants and contractors beyond those requirements contained in existing laws and regulations. 
Furthermore, while federal law requires state highway departments to be “suitably equipped and organized,” the law also includes a provision that a state may engage, to the extent necessary or desirable, the services of private engineering firms in meeting these provisions. According to FHWA, some FHWA division offices have interpreted this provision as providing state DOTs with broad authority to use consultants to perform department work. FHWA has compiled relevant legislation and regulations regarding the contracting out of highway activities on its Web site to serve as guidance to state DOTs. FHWA has also played a role in encouraging states to consider alternative contracting techniques and methods, and to consider greater involvement from the private sector through public-private partnerships to improve project delivery and seek out alternative sources of funding. For example, FHWA has encouraged contracting techniques and public-private partnerships through Special Experimental Projects 14 and 15, with many of these techniques allowing consultants and contractors to assume additional responsibilities in the delivery of highway projects. While state DOTs conduct project-level oversight on the majority of highway projects, FHWA retains project-level oversight on a limited number of projects, based on its stewardship agreement with the state DOT. Regarding states’ use of consultants and contractors, the agency’s oversight efforts are generally focused on ensuring compliance with existing laws and regulations. For example, the division office must concur in the award of certain contracts, and when providing concurrence for an engineering contract, a division office will seek to ensure that the state DOT has used an appropriate qualifications-based selection process, as required by law. When conducting project-level oversight, division office officials will also do at least some on-site monitoring of the work. 
During these on-site visits, FHWA will assess the project’s status and verify that the project complies with plans and specifications. As part of this process, division office officials told us that they will often observe ongoing project activities to ensure that materials testing and other quality control and quality assurance procedures follow regulations. The amount of on-site oversight varies greatly, depending on the perceived project risk, which is generally determined according to the cost of the project, its complexity, and its visibility to and potential impact on the public. Division office officials told us that on projects with very high visibility, they will have an engineer on-site up to several times a week. However, for other projects, they may not send an engineer out to the site more than once or twice over the life of a project. According to division office officials, even when conducting project-level oversight, they still rely on the state DOTs to properly administer the project; much of FHWA’s role is not to perform direct oversight, but rather to make sure that the highway department is doing appropriate oversight. Once the project is completed, FHWA is responsible for final inspection and project acceptance. FHWA also conducts oversight related to the use of consultants and contractors through reviews of state programs and processes that may involve consultants and contractors. To identify those areas that pose the greatest threats or opportunities to states’ federal-aid programs and to assist the division offices in allocating their limited resources in the most effective manner, FHWA has encouraged a risk-based approach to identifying areas for review, and given division offices flexibility in determining which program areas to focus on in their risk assessments. Through this risk assessment process, many division offices have identified issues related to the use of consultants and contractors. 
We have identified at least 15 states where FHWA division offices have conducted process reviews specifically concerning the contracting out of work over the past 5 years. We have also identified at least 2 other states where FHWA division offices are currently conducting similar reviews. These reviews focus on a variety of issues related to the use of consultants and contractors, and many have recommendations for how state DOTs can improve their processes for procuring and administering consultants and contractors. As a result of division offices’ identification of the use of consultants and contractors as an area of high risk, FHWA headquarters has also conducted national reviews that involve issues related to this matter. Under its recently created National Review Program, FHWA has completed reports on quality assurance and oversight of local public agencies that include discussions of issues associated with the contracting out of work. FHWA is also currently undertaking an additional review that is looking at the administration of consultant contracts. In addition to these reviews, FHWA has also conducted a series of annual reviews of state DOTs’ quality assurance activities over the last several years that have highlighted concerns related to material testing conducted by consultants. A final way that FHWA exercises oversight relating to the use of consultants and contractors is through its approval of various state DOT documents. As part of their oversight responsibilities, division offices are responsible for approving a variety of state DOT manuals, standards, and policy documents that establish procedures for implementing the federal-aid highway program in the state. For example, state DOTs must develop written procedures outlining their process for procuring consultant services, which must be approved by FHWA. 
FHWA must also approve other documents that may not be directly focused on the contracting out of work, but that address work that is often performed by consultants or contractors. For example, division offices are responsible for approving state DOTs’ quality assurance programs for materials on construction projects. FHWA has identified many ways that the contracting out of work can pose risks to the federal-aid highway program. For example, a series of FHWA reviews of quality assurance activities found many critical deficiencies in state oversight of consultants in these activities, such as the lack of independent sampling of highway materials for verification tests; inadequate statistical comparisons of test results; and insufficient state control of test samples, sampling locations, and testing data. Such shortcomings in state DOTs’ quality assurance programs could have a detrimental effect. For example, in its quality assurance review, FHWA states that pavement on highways is deteriorating faster than expected and asserts that this is likely, at least in part, due to the identified weaknesses in state DOTs’ quality assurance programs. In addition, another national FHWA study related to the use of local public agencies found that local agencies are often highly dependent on consultants to deliver the projects and may not have the expertise to adequately oversee the work of the consultants and to ensure the quality of the services they receive. The study further found that some states may not be conducting adequate oversight over these projects, and that the states’ reviews tend to be reactive, rather than proactive. Division offices have also cited areas of risk associated with the growing use of consultants. 
For example, an Illinois Division Office process review raised concerns about the possibility that firms that had performed design work for a project might also do construction inspection work on the same project, which would pose the potential for conflicts of interest. In our interviews with division office officials, many cited the challenges that contracting out poses for state DOTs in regard to maintaining sufficient in-house expertise. Also, several division office officials perceived contracting out work to be more expensive than keeping the work in-house, resulting in an inefficient use of public funds. Division office officials we interviewed also pointed out that FHWA’s division offices have themselves suffered reductions in staff and an erosion of expertise and experience, which can hamper their oversight activities. FHWA officials stated that many division offices also identified areas of risk related to the contracting out of work during FHWA’s first national risk management cycle. Although the use of consultants and contractors was not one of the 49 key elements that division offices were required to assess, many division offices still identified it as an area of risk. According to FHWA, 23 division offices identified risks related to the use of consultants as one of their top risks, with division offices finding such risks present throughout various state DOT program areas, including in construction, design, and right-of-way. These risks included concerns that consultants do not have the necessary skills to complete tasks according to federal regulations, consultants are not supplying sufficient personnel or resources to complete jobs, and state DOTs have been overly relying on consultants to select and manage contractors. 
While FHWA has identified risks associated with the use of consultants and contractors, the agency has not comprehensively assessed how, if at all, it needs to adjust its oversight efforts to protect the public interest, given current trends in the use of consultants and contractors. Also, FHWA has not instructed its division offices to consider issues related to the amount and type of work contracted out when outlining oversight responsibilities in their stewardship agreements with state DOTs. Overall, FHWA division offices generally described their role as ensuring compliance with existing regulations and not assessing the performance of state DOTs in achieving transportation goals. This has the potential to limit the value of the agency’s oversight activities. For example, FHWA acknowledges in its report on quality assurance in materials and construction that it is possible to have a quality assurance program for materials that is compliant with regulations, but is not performing effectively, and vice versa. This FHWA report also finds that division offices are often not fully aware of what components should be part of quality assurance programs, and, as a result, the effectiveness of these programs is not being adequately assessed. FHWA has made progress in addressing some of the concerns related to its oversight program and is considering additional steps to mitigate risks associated with the use of consultants and contractors in the future. The agency is currently developing an implementation plan in response to the recommendations in its quality assurance report. This plan may seek to address some of the risks associated with the involvement of consultants and contractors in the quality assurance process. Also, FHWA is continuing to refine its risk management approach to better identify risks throughout the country and to more fully develop methods for addressing identified risks. 
Finally, as we have previously discussed, FHWA division offices have been working to revise their current stewardship agreements to incorporate further considerations of risk and to also identify performance measures that will assist in increasing accountability in the federal-aid program, based on FHWA guidance. However, FHWA guidance gives state DOTs and division offices broad flexibility in how they assess risks and develop performance measures. As of October 2007, FHWA reported that 21 of the agreements had been revised, with 15 of them incorporating considerations of risk and performance measures. Five more agreements incorporated considerations of risk, but not performance measures. State DOTs have long used contractors and consultants to augment existing workforces. Recent trends suggest that consultants and contractors are used more than ever before and in a multitude of different activities—from designing projects, to appraising and acquiring rights-of-way, to managing and inspecting projects—and, in some cases, consultants and contractors may be responsible for projects from beginning to end. While there is no conclusive evidence of the long-term differences in costs and benefits between using consultants and contractors and obtaining additional state staff, this consideration is largely inconsequential to state DOTs because many are now dependent on consultants and contractors to deliver their growing highway programs. Given this reality, effective oversight and monitoring of consultant and contractor workforces become critical to state DOTs to ensure that work is performed according to standards and specifications, and that materials used meet quality and performance standards. While the state officials that we interviewed generally believe they have sufficient controls in place to conduct such oversight, there is some evidence from state auditors’ reports that these controls are not always implemented effectively. 
Furthermore, state officials we interviewed recognize that there will be increased risk to the highway program over the long term, given (1) the growing potential for conflicts of interest and independence issues and (2) the reality of a changing workforce at state DOTs and difficulties in attracting and retaining staff with key skills. We have previously reported that there is a need for a fundamental reexamination of the highway program and a need for national transportation goals to be better defined and linked to performance measures to evaluate what the respective programs actually accomplish. Regarding the growing use of consultants and contractors by state highway departments, FHWA’s oversight has generally focused on ensuring that state processes related to this matter comply with existing regulations, and has not sufficiently focused on the performance and effectiveness of those processes in protecting the public interest or in achieving national transportation goals. We recognize that FHWA has a number of efforts under way that are geared toward refining its approach to oversight of state DOTs, including developing a plan to address the issues raised in its national review of quality assurance programs, working to identify areas of vulnerability in the federal-aid highway program through its national risk management cycle, and continuing a national program review of consultant administration currently under way. In addition, division offices are continuing to revise their stewardship agreements to be more risk- and performance-oriented. However, further efforts to assess how FHWA could best adjust its oversight and focus its activities on consistently ensuring the performance and effectiveness of state DOTs’ programs and processes as they relate to the management of consultants and contractors would increase the value of FHWA oversight in this area. 
In addition, while several stewardship agreements have recently been revised to incorporate a more risk- and performance-oriented approach to conducting federal oversight, most states have yet to revise their agreements, and some revised agreements have not incorporated performance measures. To more effectively and consistently ensure that state DOTs are adequately protecting public interests in the highway program, given current trends in the use of consultants and contractors, we recommend that the Secretary of Transportation direct the Administrator of the Federal Highway Administration, in the context of FHWA’s ongoing activities related to quality assurance programs and risk management, to work with FHWA division offices to (1) give appropriate consideration to the identified areas of risk related to the increased use of consultants and contractors as division offices work to target their oversight activities and (2) develop and implement performance measures to better assess the effectiveness of state DOTs’ controls related to the use of consultants and contractors to better ensure that the public interest is protected. We provided copies of this report to the Department of Transportation, including FHWA, for its review and comment. DOT officials provided technical clarifications, which we incorporated as appropriate. The department took no position on our recommendation to work with FHWA division offices regarding state DOTs’ increased use of consultants and contractors. We are sending copies of this report to interested congressional committees, the Secretary of Transportation, and the Administrator of the Federal Highway Administration. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or at heckerj@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This report addresses the following objectives: (1) the recent trends in the contracting out of state highway activities; (2) the factors that influence state departments of transportation (state DOT) in deciding whether to contract out activities and the extent to which state DOTs assess costs and benefits when making such decisions; (3) how state DOTs protect the public interest when work is contracted out, particularly when consultants and contractors are given substantial responsibility for project and service quality and delivery; and (4) the Federal Highway Administration’s (FHWA) role in ensuring that states protect the public interest. To determine the recent trends in the contracting out of state highway activities, we performed a literature review of existing research and survey data to identify general trends over the periods covered by those surveys and to use as a general baseline for comparison with current levels of contracting out. We also surveyed and received responses from all 50 state DOTs, using a Web-based questionnaire. In developing the survey, we consulted a representative from the American Association of State Highway and Transportation Officials (AASHTO) and also consulted a highway expert who is a former President of AASHTO, a former head of the Utah DOT, and an author of numerous studies on highway contracting issues. On the basis of the information received in these consultations, we revised our survey instrument. In addition, we conducted survey pretests over the telephone with state DOTs in Illinois and Maryland. We also revised our survey instrument on the basis of information we received in these pretests. We conducted the survey from mid-June to mid-September 2007. 
During this period, we sent 2 rounds of follow-up e-mails to nonrespondents in addition to the initial e-mailing. We also made follow-up telephone calls and sent follow-up e-mails to several state DOTs to encourage them to complete the questionnaire. We then surveyed the state DOTs to learn about the extent to which they contract for services across 7 categories of highway activities, including preliminary engineering, design, construction engineering and inspection, federal-aid eligible preventive maintenance, routine maintenance activities not eligible for federal-aid program funding, ongoing operations, and right-of-way appraisals. We also surveyed state DOTs to determine how the levels of contracting for these activities have changed over the past 5 years and to gather information about potential future trends in contracting. In addition, we used the survey to identify which factors state DOTs said are driving them to contract out activities or to keep work in-house. Finally, the survey gathered data on state DOTs’ use of alternative contract types and techniques and collected information on certain contracting concerns that are specific to design-build contracts. In developing the questionnaire and in collecting and analyzing the data, we took steps to minimize errors that could occur during those stages of the survey process. The detailed survey results are available in appendix III. Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as “nonsampling” errors. For example, difficulties in interpreting a particular question, making sources of information available to respondents, entering data into a database, or analyzing these data can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing the data to minimize such nonsampling errors. 
For example, social science survey specialists designed the questionnaire in collaboration with GAO staff who have subject matter expertise. Then, as we have previously noted, our questionnaire was reviewed by experts in this field and was pretested in 2 states. When we analyzed the data, an independent analyst checked all computer programs. Since this was a Web-based survey, respondents entered their answers directly into the electronic questionnaire—eliminating the need to key data into a database and further minimizing errors. To gather further information on the recent trends in the contracting out of state highway activities, we performed a series of in-depth interviews with highway department officials in 10 states throughout the country: Arizona, California, Georgia, Illinois, Louisiana, Maryland, Minnesota, Pennsylvania, South Carolina, and Utah. These interviews allowed the team to gather in-depth and contextual information on state DOT contracting practices that could not be obtained through a survey. We conducted all of the interviews using a data collection instrument that we developed. In selecting state DOTs to interview, we used a nongeneralizable sample, rather than performing random sampling. We chose this approach to ensure that the sample set included state DOTs with a range of contracting experiences and practices. When selecting which state DOTs to include in the sample, we considered a range of criteria, including (1) the region in which the state is located; (2) the degree to which the state DOT contracts out highway activities; (3) the range of contracting approaches the state uses, including nontraditional project delivery methods such as design-build or asset management as reported in previous reports; (4) the legal and policy requirements the state faces in regard to contracting out highway activities; and (5) the extent to which the state has performed analyses of the costs and benefits of contracting out highway activities. 
To select the states for the sample, we reviewed relevant academic, expert, state, and federal research and existing survey data on state outsourcing activities to make an initial assessment of where various state DOTs fell along the spectrum for each of the criteria and to identify any unique features of the states’ outsourcing programs that would be particularly useful to study in greater depth. For example, we looked for criteria such as state DOTs that had developed unique contracting practices, state DOTs that were rapidly changing the way their departments conducted business, and state DOTs whose outsourcing experiences had been particularly successful or problematic. Lastly, we generally sought to avoid selecting states that had already been studied in great depth and whose contracting experiences are already well-documented, such as Florida. To determine the factors that influence state DOTs in deciding whether to contract out activities and the extent to which state DOTs assess costs and benefits when making such decisions, we used state DOTs’ responses from our survey regarding the importance of various factors in their decisions to contract out various highway activities and in their decisions to continue to perform work with in-house staff. In addition, we relied on information gathered in our in-depth interviews to further determine the importance of various factors in contracting decisions and to gain important contextual information on these various factors that could not be achieved through the survey. We also reviewed the literature to identify existing studies that sought to consider the costs and benefits of contracting out highway activities versus performing them with in-house staff, and we compiled and summarized the results from various studies. We also identified methodological limitations associated with such studies and the potential impacts they have on the reliability of any findings. 
To determine whether states’ decisions to contract out highway activities were associated with certain demographic or economic conditions in each state, we conducted a correlation analysis. For the analysis, we used data from our survey on the percentage of work that state DOTs contract out for 7 types of activities. Although all 50 states completed the survey, some states did not provide values for all activities. The number of states that provided values ranges from 39 to 46, depending on the activity. We then identified a series of state characteristics to test whether they are associated with the extent to which states contract out these activities. These variables included population, population density, population growth over the past 5 and 10 years, the percentage of a state’s population living in urban areas, annual vehicle miles traveled in the state, annual vehicle miles traveled per person in the state, total lane miles per person in the state, the number of road miles with a pavement international roughness index score greater than 170 (a measure of pavement quality, with a score greater than 170 indicating pavement of poor quality) per person in the state, state per capita income, state pension fund liabilities per person, state highway capital outlays per person, and the change in state highway capital outlays over the past 5 and 10 years. We selected these variables because we could identify plausible reasons that states with higher values of these variables might be either more or less likely than states with lower values to contract out highway activities. We identified reasons that each of these variables could impact either highway demand or supply conditions in a state, or could impact the state’s ability to conduct highway activities with an in-house workforce. Data on these various state characteristics were compiled from the U.S. Census, FHWA, and the Public Fund Survey. 
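The single-variable screening described here, pairing one contracting measure with one state characteristic at a time, can be sketched as a plain Pearson correlation. The state values below are invented for illustration (they are not survey, Census, or FHWA figures); only the pairing logic mirrors the analysis.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation for paired observations; skips missing pairs,
    mirroring states that did not provide values for an activity."""
    pairs = [(x, y) for x, y in zip(xs, ys) if x is not None and y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sqrt(sum((x - mx) ** 2 for x, _ in pairs))
    sy = sqrt(sum((y - my) ** 2 for _, y in pairs))
    return cov / (sx * sy)

# Hypothetical values for five states: percentage of design work
# contracted out (a survey measure) paired with per capita income.
pct_design_contracted = [55, 70, 40, 85, 60]
per_capita_income = [41_000, 48_000, 38_000, 52_000, 44_000]

r = pearson(pct_design_contracted, per_capita_income)
```

Repeating the calculation for each of the 7 activity measures against each of the 14 state characteristics yields the 98 coefficients, which can then be screened for clear positive or negative associations and for the strength of those associations.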
We then calculated the correlation coefficients for the 98 relationships to be tested and analyzed the results to see if there were any clear positive or negative associations among the variables and to assess the strength of such associations, as shown in table 6. We did not, however, analyze the associations among these variables in a multivariable analysis because of the lack of a strong conceptual framework based in economic theory for determining an appropriate model. Given this, our analysis paired the percentage of work contracted out with each economic or demographic characteristic individually and did not control for the effects of other characteristics on contracting levels. Multivariable analysis might have revealed more complex relationships among the state characteristics and between those characteristics and the level of contracting out highway activities. To determine how state DOTs protect the public interest when work is contracted out, particularly when consultants and contractors are given substantial responsibility for project and service quality and delivery, we used information from our in-depth interviews with the state DOTs. In our interviews with the state DOTs, we gathered information regarding the manner in which state DOTs define and determine the key interests of the public. We asked state DOTs about the various controls they put in place throughout the highway delivery process to ensure that the public interest is protected when work is contracted out. Along with this, we asked about prequalification procedures, bidding processes, the oversight and monitoring of consultants and contractors while work is being performed, and quality assurance programs, among other things. We also conducted interviews with industry stakeholders from six different organizations knowledgeable about the outsourcing of highway activities to obtain additional perspectives on how state DOTs seek to protect the public interest. 
In addition, we used state DOT responses from our survey to identify various alternative contract types and techniques that states use to achieve desired outcomes, such as time or cost savings, and to determine how frequently state DOTs use such techniques. Finally, we sent out a request to auditing agencies in all states for any reports available on the contracting out practices of state DOTs and reviewed additional reports discussed in the literature. We reviewed reports from 11 states that addressed their state DOTs’ use of consultants and contractors. To determine FHWA’s role in ensuring that states protect the public interest, we reviewed applicable federal laws and regulations as well as FHWA policy and guidance documents. We also interviewed FHWA officials at the national level as well as at 10 division offices corresponding to the 10 state DOTs we selected for in-depth interviews. FHWA headquarters offices we met with include the following: the Office of Infrastructure, the Office of Asset Management, the Office of Professional and Corporate Development, the Office of Program Administration, and the Office of Planning, Environment and Realty. In addition, we reviewed program and process reviews from FHWA’s national and division offices to identify key areas of oversight focus and key findings that have been reached in such reviews regarding state contracting procedures and quality assurance procedures. For this report, we limited the scope of our review to contracts where firms are paid to provide a service related to highway infrastructure. Although public-private partnerships—where a firm takes effective ownership of a facility and assumes control over it, usually for an extended period—are essentially contractual relationships, we did not include them in the scope of our work. We conducted this performance audit from December 2006 through January 2008 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In our research, we identified a variety of studies that seek to compare the costs of performing highway activities with in-house staff versus contracting out the work. A variety of parties have conducted such studies, including highway departments, state auditing agencies, academics, industry groups, and employee unions. Some studies focused on one particular state, while other studies considered a range of states’ experiences. Of the studies we identified, engineering activities (design, construction engineering and inspection, or both) were the most common focus, although we also reviewed several studies that examined the contracting out of maintenance activities. A few studies examined a range of activities within a state DOT’s highway program. While some studies sought to do their own analyses, many simply reviewed and summarized other analyses that have been performed. In addition, several of the studies focused on the methodological challenges faced in conducting cost comparisons and sought to suggest ways that such studies could more effectively be structured, rather than actually performing any of their own cost comparison analyses. Of the studies we reviewed, almost all that considered engineering activities found contracting out to be more expensive. Generally, studies attributed this extra expense to higher salaries paid by private firms, higher overhead costs for private firms, private firms’ need to earn a profit, and highway department contract administration costs. 
Among those studies that performed separate analyses for different types of engineering work, there was some indication that cost differentials may vary, with consultant and in-house costs being more comparable for certain types of engineering activities. For example, one study by PricewaterhouseCoopers for the Texas DOT found that of the 13 design activities it considered, consultants were more expensive for 8 of these activities. The results were inconclusive for the other 5 activities. Among those 8 activities where consultants were found to be more expensive, the cost differential varied from 27 to 97 percent, depending on the activity. Findings as to the degree by which consultants were more expensive than in-house staff also varied significantly among studies. For example, 1 study that reviewed 16 other engineering cost studies found that, among the studies concluding that consultants were more expensive, the reported differential ranged from less than 16 percent to over 100 percent. We reviewed only two studies that found that engineering consultants were less expensive than using in-house employees. The first study, which was performed by the state auditing agency in Alaska, found consultants to be on average 24 percent less costly. The second study was performed by the Wisconsin Department of Administration and sought to rebut findings in an earlier Wisconsin highway department study that had found consultants to be more expensive. We also identified one other study discussed in the literature that found that the cost of professional engineering services, as a percentage of total construction costs, declined as the proportion of engineering work contracted out increased. 
A few studies also found either that there were no significant differences in costs between in-house and consultant-performed engineering work, or that existing data limitations and difficulties in developing appropriate methodologies made the accuracy of cost results questionable. Among those studies that examined differences in costs between in-house and contracted out maintenance work, the picture was more mixed than for engineering activities, with some studies indicating the potential for cost savings through the contracting out of maintenance activities in at least some situations. Studies cited various reasons why contracting out maintenance work could potentially result in cost savings, including the reduced need for state DOTs to make capital investments in expensive equipment, added flexibility for the highway departments to reduce staffing during slow periods (such as the winter), and the increased competition generated by contracting out the work. Studies that identified cost increases associated with the contracting out of maintenance work pointed to difficulties in administering contracts and monitoring performance, to the lack of information to effectively negotiate prices, and to cost escalation after work is privatized. We identified a series of methodological issues and other limitations that make accurate cost comparisons difficult and potentially impact the reliability of these studies’ findings. One of the most problematic aspects of comparing in-house and consultant costs is establishing an appropriate overhead rate for in-house work. State DOTs’ accounting systems are often not set up in such a manner that they accurately capture all relevant overhead costs and appropriately apportion them among individual projects or functional units in a highway department. Also, data on in-house costs are often incomplete or unreliable. 
For example, in-house staff may not accurately bill time spent on a specific project, thereby distorting in-house costs for that project. Many studies also leave out costs that may be relevant, such as state insurance costs. There are also other life-cycle costs, such as the pension costs associated with additional public employees, that are difficult to quantify and not considered in most studies. Another problem encountered is that many studies seek to identify “like” projects and compare the costs of those performed by in-house employees and those performed by consultants or contractors. No two projects are the same, however, and it is often difficult to isolate other variables that may have impacted costs. A final weakness with the studies that we reviewed is that very few of them sought to systematically determine the benefits of performing work in-house versus contracting it out, thereby providing an incomplete picture of the extent to which contracting out highway activities might or might not be desirable. Some of the studies did use testimonial evidence gathered through either surveys or interviews to attempt to make some assessments of differences in quality, depending on whether work was performed by in-house staff or contracted out. Of those studies, the majority found that quality did not vary significantly depending on whether the work was contracted out or performed in-house. Some studies also provide anecdotal information on some potential benefits or problems with contracting out work. Only one study that we reviewed sought to quantitatively assess differences in quality between in-house and consultant-performed work. This study, performed by the state auditing agency in Alaska, compared the number of change orders on construction projects that had been designed by either in-house staff or consultants and the average costs of such change orders. Using this metric, the auditing agency found in-house performed design work to be of a higher quality. 
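The sensitivity of such comparisons to the assumed in-house overhead rate can be shown with a deliberately simple hypothetical calculation; the dollar figures and rates below are invented and are not drawn from any of the studies reviewed here.

```python
# Hypothetical comparison: the same direct in-house cost appears cheaper or
# more expensive than a consultant bid depending on the overhead rate chosen.
direct_in_house = 100_000   # direct labor billed to the project
consultant_bid = 160_000    # fully loaded consultant price

for overhead_rate in (0.30, 0.70):
    loaded_in_house = direct_in_house * (1 + overhead_rate)
    cheaper = "in-house" if loaded_in_house < consultant_bid else "consultant"
    print(f"overhead {overhead_rate:.0%}: loaded in-house cost "
          f"${loaded_in_house:,.0f} -> {cheaper} appears cheaper")
```

At a 30 percent overhead rate the in-house option appears cheaper ($130,000 versus $160,000); at 70 percent the consultant does ($170,000 versus $160,000), even though nothing about the underlying work has changed.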
Given that the majority of the state DOTs with whom we met told us that they tend to contract out larger, more complicated projects, or those requiring certain types of expertise not possessed in-house, relying simply on comparisons of cost may not be appropriate. If consultants are working on larger, more complicated projects, it is reasonable to expect that the costs of these activities, such as design work, may be higher. Also, it is not unreasonable to anticipate that a premium would be paid for specialized expertise. In addition, none of the studies sought to systematically quantify whether there are any time savings associated with contracting out work and what the value of such time savings would be for road users. This appendix presents selected results of GAO’s Web-based survey of state DOTs (see tables 7 to 26). The purpose of this survey was to gather information from the state DOTs about recent trends in their contracting out of state highway activities. We surveyed the state DOTs about the extent to which they contract for services across 7 categories of highway activities, including preliminary engineering, design, construction engineering and inspection, federal-aid eligible preventive maintenance, routine maintenance not eligible for federal-aid program funding, ongoing operations, and right-of-way. We also surveyed state DOTs to determine how the levels of contracting for these activities have changed over the past 5 years and to gather information about potential future trends in contracting. In addition, we used the survey to identify which factors state DOTs said are driving them to contract out activities or to keep work in-house. Finally, the survey gathered data on state DOTs’ use of alternative contract types and techniques and collected information on certain contracting concerns that are specific to design-build contracts. We sent this survey to the 50 state DOTs. We received 50 completed surveys for a response rate of 100 percent. 
However, not all states responded to every survey question. Appendix I contains a more detailed discussion of our objectives, scope, and methodology. We administered this survey from mid-June to mid-September 2007. In addition, Andrew Von Ah (Assistant Director), Jay Cherlow, Steve Cohen, Greg Dybalski, Colin Fallon, Brandon Haller, Bert Japikse, Stuart Kaufman, Bonnie Pignatiello Leer, Jennifer Mills, Josh Ormond, Minette Richardson, and Ryan Vaughan made key contributions to this report.

Pressure on state and local governments to deliver highway projects and services, and limits on the ability of state departments of transportation (state DOT) to increase staff levels, have led those departments to contract out a variety of highway activities to the private sector. As requested, this report addresses (1) recent trends in the contracting of state highway activities, (2) factors that influence state highway departments' contracting decisions, (3) how state highway departments ensure the protection of the public interest when work is contracted out, and (4) the Federal Highway Administration's (FHWA) role in ensuring that states protect the public interest. To complete this work, GAO reviewed federal guidelines, state auditor reports, and other relevant literature; conducted a 50-state survey; and interviewed officials from 10 selected state highway departments, industry officials, and FHWA officials. State DOTs have increased the amount and type of highway activities they contract out to consultants and contractors. State DOTs are also giving consultants and contractors more responsibility for ensuring quality in highway projects, including using consultants to perform construction engineering and inspection activities as well as quality assurance activities. Many state officials reported that they expect the amount of contracted highway activities to level off over the next 5 years, due to factors such as uncertain highway program funding levels. 
State DOTs indicated that the most important factor in their decision to contract out highway activities is the need to access the manpower and expertise necessary to ensure the timely delivery of their highway program, given in-house resource constraints. Officials said that they must contract out work to keep up with their highway programs. Of the 50 departments that completed GAO's survey, 38 indicated that they have experienced constant or declining staffing levels over the past 5 years. While state DOTs consider cost issues when making contracting decisions, cost savings are rarely the deciding factor in contracting decisions, and none of the 10 departments that GAO interviewed had a formal process in place for systematically assessing costs and benefits before entering into contracts. State DOT officials that GAO interviewed believe that they have sufficient tools and procedures in place to select, monitor, and oversee contractors to ensure that the public interest is protected. However, implementation of these mechanisms is not consistent across states, and state auditors reported weaknesses in several states. State DOTs also face additional challenges in conducting adequate oversight and monitoring, given current trends in the use of consultants and contractors. For example, while state employees are always ultimately responsible for highway project acceptance, they are increasingly further removed from the day-to-day project oversight. Officials from all 10 state DOTs that GAO interviewed said that current trends may lead to an erosion of in-house expertise that could affect the state DOTs' ability to adequately oversee the work of contractors and consultants in the long term. Because states have broad latitude in implementing the federal-aid highway program, FHWA has a limited role in states' use of consultants and contractors. 
Typically, FHWA's focus is on ensuring that state DOTs are in compliance with federal regulations when contracting out, such as ensuring that federal bidding requirements are met. FHWA has conducted both local and national reviews that have also identified various risks related to the increased use of consultants, including weaknesses in state quality assurance programs and an increased potential for conflicts of interest. While FHWA has identified these risks, it has not comprehensively assessed how, if at all, it needs to adjust its oversight efforts to protect the public interest, given current trends in the use of consultants and contractors. 
Expansion of e-government was one of five top priorities in the President’s fiscal year 2002 management agenda for improving government performance. To support that priority, a task force, led by OMB, was established in 2001 and charged with identifying electronic government projects that could deliver significant productivity and performance gains across government. The task force analyzed the federal bureaucracy and identified areas of significant overlap and redundancy in how federal agencies provided services to the public. The task force found that multiple agencies were conducting redundant operations within 30 major functions and business lines in the executive branch. To address these redundancies, the task force evaluated potential projects, focusing on collaborative opportunities to integrate IT operations and simplify processes within lines of business across agencies and around citizen needs. As a result of this assessment, the task force identified a set of high-profile e-government initiatives for accelerated near-term implementation. These are now the 25 OMB-sponsored initiatives. The President’s management agenda outlined the following expected results of e-government: provide high-quality customer services regardless of whether the citizen contacts the agency by phone, in person, or on the Web; reduce the expense and difficulty of doing business with the government; cut government operating costs; provide citizens with readier access to government services; increase access for persons with disabilities to agency Web sites and e-government applications; and make government more transparent and accountable. OMB also established a portfolio management structure to help oversee and guide the initiatives and facilitate a collaborative working environment for each of them. 
This structure includes five portfolios: “government to citizen,” “government to business,” “government to government,” “internal efficiency and effectiveness,” and “cross-cutting.” Each of the 25 initiatives is assigned to one of these portfolios, according to the type of results the initiative is intended to provide. Further, for each initiative, OMB designated a specific agency to be the initiative’s “managing partner,” responsible for leading the initiative, and assigned other federal agencies as “partners” in carrying out the initiative. OPM was designated the managing partner for five initiatives—Recruitment One-Stop, which is to provide a consolidated Web site for federal job applicants; e-Clearance, which seeks to improve the process of granting security clearances; Enterprise Human Resources Integration, which is to replace paper personnel files with electronic records; e-Training, which is to provide Internet-based training for federal employees; and e-Payroll, which seeks to consolidate federal payroll systems. The five initiatives are all part of the internal efficiency and effectiveness portfolio. In developing this testimony, our objectives were to describe the progress of the five e-government initiatives being managed by OPM and identify key challenges associated with implementing them successfully. To address these objectives, we analyzed relevant documentation from OPM and interviewed project officials from each of the initiatives. To assess progress to date and identify major challenges to implementing the initiatives, we analyzed the reported accomplishments and planned activities of the projects and compared them with information provided in the initiatives’ original business cases. We also held discussions with agency officials to obtain additional information. We performed our work in September 2003 in accordance with generally accepted government auditing standards. 
OPM’s e-government initiatives are intended to serve as a complete set of electronic support tools for the federal government’s human capital functions, including recruitment, security clearances, personnel records, training, and payroll. OPM’s retirement systems modernization project—not an OMB-sponsored initiative—rounds out this set of tools. OPM’s vision is for these initiatives to streamline and improve the process for moving employees through the entire life cycle of their employment with the federal government and to do so consistently with the evolving Federal Enterprise Architecture as well as with security and privacy standards. According to the agency, the success of the initiatives will depend on leveraging of existing IT coupled with standardization and consolidation practices that are beneficial to end users. If successful, these initiatives are likely to accrue savings to the federal government by reducing redundancy among agency systems and streamlining the various processes involved in tracking and managing federal employment. Although we have not evaluated its claim, OPM asserts that its e-government projects will save approximately $2.6 billion over the life of the initiatives. These savings are expected to derive not only from eliminating duplicative personnel systems, such as payroll systems, but also from such process improvements as reducing the amount of time it takes to obtain a security clearance and streamlining the way in which training is administered. Table 1 provides an overview of OPM’s e-government projects and key milestones, and table 2 provides a summary of changes in cost estimates for the initiatives. Recruitment One-Stop is a collaborative effort between OPM and its federal agency partners to develop a comprehensive Web site (www.usajobs.opm.gov) to assist applicants in finding employment with the federal government. 
Full implementation of Recruitment One-Stop is expected to benefit citizens by providing a more efficient process for locating and applying for federal jobs, and to assist federal agencies in hiring top talent in a competitive marketplace. As we have previously reported, automation has the potential to provide a variety of benefits in streamlining the hiring of new employees. The specific objectives of Recruitment One-Stop that will benefit federal job applicants include a single portal advertising federal job opportunities that supports searching for jobs by type, location, salary, or level of experience; a standard method for applying for federal positions that provides immediate feedback on basic eligibility; basic eligibility screening that addresses issues such as citizenship, age, and special occupational requirements, such as the need to carry firearms; standardized vacancy announcements with additional detailed information available via electronic “hyperlinks”; tools to build and store an on-line resume, including a resume template covering all information normally needed to make basic qualifications and eligibility determinations; and the ability to check on the status of federal job applications by accessing basic information such as closing and/or cancellation dates, dates of candidate referral, and points of selection. In addition, agencies are expected to be able to search and review the resumes of consenting applicants in the USAJOBS database, a process called applicant database mining. This feature will assist agencies in locating candidates for hard-to-fill positions by capturing “passive” job seekers who have resumes on file, but who may not have thought of looking for opportunities within a particular agency, job field, or location. 
To date, the Recruitment One-Stop initiative has met several planned milestones, including implementing enhancements to the previously existing www.usajobs.opm.gov Web site in August 2003, such as a resume builder to assist job applicants in developing up to five versions of their resume with which to apply for federal jobs, and a basic application status tracking tool to assist applicants in finding the status of their federal applications. By the end of this month, OPM plans to have all executive branch agencies using the Web site to advertise their jobs. By December 2003, it intends to begin working with agencies to shut down agency-unique job search engines and resume builders. OPM has continued development of the enhanced USAJOBS Web site despite a successful bid protest against its contract award for implementing the enhancements. On January 16, 2003, OPM awarded a contract to TMP Worldwide, Inc., to support enhancements to the Web site. However, on January 24, 2003, a competing vendor, Symplicity Corporation, protested the award. We sustained Symplicity’s protest on April 29, 2003, based on a determination that OPM did not exercise certain necessary evaluative controls in its review of the bids before awarding the contract, resulting in errors in the bidding process that created an unfair competitive environment. For example, we found that OPM did not perform an analysis of whether the quoted services, labor categories, and other direct costs included in TMP’s quotation were within the scope of TMP’s approved GSA contract schedule. Based on this finding, we recommended that OPM reopen discussions with all vendors whose quotations were competitive and request and reevaluate revised quotations. 
However, on July 21, 2003, OPM informed us that it would not reopen discussions with vendors, citing as one of its reasons the need to complete the system “within the government’s required time frame.” On August 5, 2003, we submitted a report to Congress summarizing the protest decisions and the circumstances of the failure of OPM to implement our recommendation. OPM is planning to measure the performance of the enhanced Web site and features with metrics such as cost per hire, time to fill vacancies, and the percentage of federal job applicants using Recruitment One-Stop. OPM expects that once Recruitment One-Stop is fully implemented, it will generate a total of $365 million in savings through fiscal year 2012. According to project officials, the expected cost savings were extrapolated from projected average annual decreases in the cost of hiring each new federal employee. By fiscal year 2005, OPM’s goal is to reduce the cost per hire from $2,790 to $2,678, reduce the time to fill job vacancies from 102 days to 97, and increase the percentage of job applicants using Recruitment One-Stop from 80 to 84 percent. The e-Clearance project is designed to improve processing of security clearances for federal employees. It focuses on consolidating and increasing access to information to improve the efficiency of granting or locating previous clearances or investigations. OPM intends the e-Clearance project to help streamline data collection and case scheduling by making it easier to locate existing investigations and clearances, providing for almost immediate retrieval of archived records as they are needed. The expected benefits include quicker granting of clearances, elimination of redundant investigations, and financial savings from a reduction in the overall costs of clearances. 
The initiative consists of three modules: Electronic Questionnaires for Investigations Processing involves the automation of the Questionnaire for National Security Positions (Standard Form 86). This paper-based form requires at least 2 hours to complete, and some federal employees are required to fill it out as often as every few months to maintain their security clearances. Since the current form is processed manually, it must be completed each time from scratch. In contrast, the electronic version of the form will be populated with previously submitted data, thereby streamlining the application process. In addition, a new form has been deployed that allows federal employees to indicate that there have been no changes in the data provided on the most recently filed Standard Form 86, or, where there are changes, to provide only the newly changed information. Clearance Verification System consists of the development and implementation of a cross-agency system to enable a single search to locate investigative and clearance information from any agency. This module requires civilian agencies to load their existing clearance information into OPM’s Security/Suitability Investigations Index so that new clearance applications can be checked against existing information. The module also includes developing a link to the Department of Defense’s Joint Personnel Adjudication System to access comparable DOD information. Imaging includes the creation, storage, and retrieval of digital images of investigative reports and other documents. Often, the longest delay in an investigation can be the retrieval, copying, and mailing of previous reports. The use of imaging is intended to ease retrieval and dissemination of investigative information for authorized users. 
Currently, OPM states that all major milestones for this initiative have been met, including activating Electronic Questionnaires for Investigations Processing; completing the Clearance Verification System connection between OPM and DOD at the end of 2002; having 80 percent of agencies load their existing clearance information into the Clearance Verification System by the end of January 2003; and beginning the process of creating digital images of existing investigative records by May 2003. The requirements for Imaging were developed between the fall of 2002 and the summer of 2003, and some agencies have begun imaging while others will phase in this capability. Additionally, OPM plans to implement a secure network for exchanging imaged files by early 2004. OPM plans several performance improvements for fiscal year 2005, including reducing the average time to process clearance forms electronically from 28 to 21 days, adding three additional forms to the one now available in the Electronic Questionnaires for Investigations Processing application, keeping unscheduled application downtime at no more than 2 percent, and providing training to all e-Clearance staff. OPM estimates that e-Clearance will realize savings of $258 million through fiscal year 2012. These savings are to be realized through avoiding agency-unique systems procurements and through a $50 reduction in the average cost of each clearance investigation. The purpose of the Enterprise Human Resources Integration (EHRI) initiative is to facilitate human capital management activities by providing storage, access, and exchange of standard electronic information, through development of a data repository of standardized core human capital data for all 1.8 million executive branch employees. These data will be in the form of an Official Electronic Record, which is intended to replace the current paper-based Official Personnel File. 
An Official Electronic Record for each employee is to be maintained through electronic exchange of information among agencies throughout an employee’s government career. Because all EHRI information exchanges will be electronic, OPM expects to reduce process cycle times, and improve the accuracy of transactions. The three primary goals of EHRI are to provide for comprehensive knowledge management and workforce analysis, forecasting, and reporting to further strategic management of human capital across the executive branch; enable expanded electronic exchange of standardized human resources data within and across agencies and systems and the attainment of associated benefits and cost savings; and provide unification and consistency in human capital data across the executive branch. OPM plans to implement EHRI in three releases. Release 1 will be a prototype of the data repository system and is scheduled to be ready by September 30, 2003. Release 2, scheduled for the second quarter of fiscal year 2004, is intended to allow biweekly employee data to be provided electronically. Release 3 is planned to incorporate interfaces between OPM’s Retirement System Modernization system and e-Training and to allow two-way electronic transfer of personnel data between agencies. The system functionality is to be incrementally available between March and September 2004. OPM plans to improve performance for fiscal year 2004 in areas such as personnel management, savings and cost avoidance, and data reliability and quality. For example, it intends to alter regulations, executive orders, and laws to enable the conversion of records to an electronic format. Another planned measure involves eliminating the need for agencies to develop new human capital management capabilities. Other measures include a reduction in the baseline data error rate and participation by 9 of the 18 partner agencies in electronic workforce forecasting. 
OPM reported that EHRI would save taxpayers around $235 million through fiscal year 2012. The purpose of the e-Training initiative is to create a government-wide e-Training environment—the Gov On-line Learning Center (www.golearn.gov)—which is to support the development of the federal workforce and provide a single source for on-line training and strategic human capital development for all federal employees. The Gov On-line Learning Center is designed to provide users access to a broad range of products and services, including mandatory government-wide training on topics such as computer security, ethics, and preventing sexual harassment, as well as agency-specific training and high-interest topics, such as homeland security. Some of the courses are to be free, while others are to be available on a fee-for-service basis. OPM also plans for the GoLearn Web site to provide tools that will allow human capital specialists and employees to match an employee’s professional and individual development to available courses and services. OPM expects that its initiative will allow agencies to focus their own training efforts on unique needs, thus maximizing the effectiveness of their expenditures on workforce performance. Providing agencies with on-demand e-learning services is expected to enable the government to better attract, retain, manage, and continuously educate the highly skilled professionals needed for a flexible and high-performing government workforce. The e-Training initiative is intended to benefit the government and federal workforce by encouraging e-training investments as part of a systematic and continuous development of federal government human capital; reducing redundancies in the development and purchase of e-training content; achieving economies of scale through consolidated purchasing; offering easy, one-stop access to a robust, high-quality e-training environment; and leveraging components of existing e-training systems. 
The e-Training Initiative is composed of three developmental modules. Module 1, launched in July 2002, included 37 free commercial off-the-shelf training courses (on topics such as project management, prevention of sexual harassment, using Microsoft Excel spreadsheet software, and change management); “Search and Select,” a set of quick 5- to 7-minute learning segments; and “E-Books,” a collection of over 100 professional journals and books. Module 2, launched in January 2003, added access to additional free commercial and government courses, fee-for-service options for e-training products and services, enhanced registration and reporting, and blended learning options for law enforcement training and executive and management training. Finally, Module 3, originally scheduled for June 2003, recently became operational. OPM states that this module will include a Merit Systems Principles e-course, as well as competency-based workforce development roadmaps specifically for IT workforce occupations. According to OPM, future releases of the Gov Online Learning Center will move from providing content to facilitation of learning. The agency also plans to introduce knowledge domains, user communities of practice, and virtual collaboration tools. OPM estimated that e-Training would save taxpayers about $784 million through 2012. These savings are expected from the lower cost associated with providing on-line training, such as savings resulting from less travel. OPM expects to have 30 customized Web sites in operation for fiscal year 2004. Its goal is to have 77,000 courses completed and 48 sites developed. It is planning to measure performance of the e-Training initiative through indicators such as the number of eligible GoLearn users, the number of participating entities, the number of GoLearn courses completed, and the number of custom sites. For example, OPM plans to increase the number of individuals registered on the GoLearn site from 142,000 to 193,000. 
The goal of the e-Payroll initiative is to substantially improve federal payroll operations by standardizing them across all agencies, integrating them with other human resource functions, and making them easy to use and cost-effective. To achieve this goal, plans are to consolidate the operations of 22 existing federal payroll system providers, simplify and standardize federal payroll policies and procedures, and better integrate payroll with other human capital and finance functions across federal agencies. Major objectives of the initiative include (1) defining governance for the initiative, (2) standardizing payroll policies, (3) establishing an e-Payroll enterprise architecture, and (4) overseeing consolidation of agency payroll operations. The first major project deliverable—establishing governance—was completed in April 2002 as scheduled. OPM chose four agencies to be providers of payroll services to all 116 executive branch agencies. The four selected providers are the General Services Administration (GSA) and the Departments of Defense, Interior, and Agriculture. The initiative is divided into two major phases: migrating each of the 18 nonselected payroll system providers to one of the four selected providers by September 2004, and merging the functions of the four selected payroll providers into just two, while working to develop a single, integrated payroll system for all executive branch agencies. Although providers have been selected and a migration schedule established for nonselected agencies, other actions have been delayed. Standardization of policies, originally scheduled for completion in July 2002, is currently ongoing. The enterprise architecture planning task and the initial phase of agency consolidations were both scheduled to begin in October 2002 but were not initiated until January 2003. 
According to the project manager, these schedule deviations have not led to a significant delay in the overall progress of the initiative toward the original goal of consolidating the 22 payroll providers to 4 by September 2004. OPM reported that e-Payroll should save $1.1 billion through fiscal year 2012. These savings would result from reducing operating costs, eliminating duplicative systems investments, and simplifying payroll processing. OPM plans to use several indicators to measure performance and improvements regarding e-Payroll for fiscal year 2005, including reductions in payroll costs per W-2 per employee, improvements in the accuracy of Treasury disbursements, and reductions in response time. Currently, the cost of payroll services per W-2 per employee can vary from $125 to $225. OPM’s plan is to lower these costs to $97. Other planned improvements include increasing the accuracy of Treasury’s disbursements from 98 percent to 100 percent and reducing the cycle time involved in delivering payroll services. OPM’s portfolio of e-gov initiatives represents an ambitious attempt to transform the way human capital functions and services are carried out in the federal government. In implementing the initiatives, OPM faces a number of challenges that, if not fully met, could erode support for the initiatives or prevent OPM from meeting its objectives and realizing the improvements and dollar savings that the agency has projected. We have commented in the past on the many challenges facing e-government projects in general. Today, I’d like to concentrate on three challenges that are especially pressing for OPM’s efforts. These include (1) managing accelerated acquisitions, (2) achieving governmentwide consolidation of common electronic functions, and (3) estimating and measuring cost savings. 
Program managers for many of the 25 OMB-sponsored e-government initiatives have been under pressure, both from OMB and within managing partner agencies, to achieve results quickly. One of the criteria for OMB’s selection of its e-government initiatives was the potential for the project to be completed “within 18–24 months.” In order to meet the demand for quick results, significant alterations have been made to the acquisition plans for several initiatives. For example, in the case of the e-Authentication initiative, which is focused on developing a centralized gateway for electronic authentication in support of the other OMB-sponsored initiatives, a decision was made to compress to approximately 3 months the competitive contracting process, which had originally been planned to take place over a full year. The major factor in this decision was the perceived need to make the planned gateway fully operational as soon as possible. However, this accelerated schedule may be difficult to achieve because it is based on an extremely short time frame, in which the selected contractor must develop, test, and deploy a fully operational gateway. In the case of the Geospatial One-Stop initiative, which aims to establish a Web portal for locating and disseminating geospatial information, the initiative’s board of directors decided in early 2003 to make an award based on an unsolicited proposal rather than continue a competitive procurement, largely because of a perceived need to implement the Web portal as quickly as possible. The change in acquisition plans has caused concern among many in the geospatial information systems community that the contractor’s proprietary approach to developing the Web portal could make it difficult for many potential contributors to share data with the portal—which would prevent the initiative from meeting its goal of providing one-stop access to geospatial data. 
OPM has likewise taken a controversial step with its recent Recruitment One-Stop acquisition. In deciding to continue with its awarded contract for Recruitment One-Stop despite a successful bid protest by Symplicity Corporation, OPM officials viewed the need to implement an e-government initiative as quickly as possible as a factor that outweighed the issues we raised concerning the conduct of the procurement. In its letter to us explaining why it intended to proceed without implementing our recommendation, OPM made clear that it was concerned about implementing Recruitment One-Stop quickly: “The [Recruitment One Stop] program is ready to become operational. National security demands and critical domestic needs underlie the Government’s vital need for efficient recruitment and hiring methods. No other contractor can complete the system within the Government’s required timeframe.” However, in our report to Congress, we noted that OPM did not provide any details to support this claim. While it is important to adhere to agreed-upon schedules and milestones, it is also important to follow established contracting procedures, which are intended to ensure fair competition. A rapidly chosen vendor may not represent the best value for the government’s investment. By leaving questions about the fairness of the Recruitment One-Stop competition unresolved, OPM risks alienating potential supporters of its e-government initiative. In order to ensure the ultimate success of its initiatives, it is important that OPM—as well as the other managing partners of OMB-sponsored initiatives—carefully weigh the risks and benefits of making significant changes to its planned acquisitions solely based on the need to produce results quickly. Each of OPM’s five initiatives aims to ultimately create a single system or Web-based service to support a specific human capital function across the federal government. 
In each case, agency-unique systems and processes must be either replaced or integrated into the planned single system. Clearly, one of OPM’s biggest challenges is managing the process of migrating agency-unique systems into consolidated systems and services that are flexible enough to effectively support the needs of virtually all federal agencies. Many current federal human capital systems are based on proprietary systems that were originally developed for the narrowly defined needs of a single agency or bureau. These systems were not designed to be interoperable with external systems, and generally were not built to government standards (which have not yet been defined for many human capital functions). The development of systems based on narrowly defined needs, combined with traditional barriers to interorganizational cooperation, has led to the current environment of duplicative, inefficient, nonintegrated (“siloed”) operations. One way to encourage interagency cooperation on multiple systems migrations is to develop a concerted strategy for managing change and communicating effectively with all affected parties. In June 2002, OPM submitted to OMB its change management and communication plan, which specified steps that OPM planned to take in managing change and communications. In implementing its plan, OPM established change management councils and delivered presentations on its plans for specific initiatives, as well as for governmentwide integration of human capital functions, to a range of audiences, including high-level officials (such as the e-Government committee of the President’s Management Council and the Chief Human Capital Officers Council) and line managers (such as human resource managers). Effective change management and communication will be critical, as agencies may be required to take positive action both to shut down existing redundant systems and to invest in new technology to connect with OPM’s standardized systems. 
OPM is planning for agencies to shut down a number of agency-unique systems and applications. For example, the e-Payroll initiative is set to reduce federal payroll providers from the current 22 to just two partnerships of two providers each. Nonselected payroll providers will be required to shut down operations. Another example is the Recruitment One-Stop initiative, which envisions that agency on-line resume building and job search engine capabilities will be shut down in favor of OPM’s centralized system. The e-Training initiative also plans for agencies to shut down their unique systems in favor of OPM’s offering. Consolidation may also mean that agencies must make new investments in order to connect with a new, integrated system. The e-Clearance initiative, for example, requires all agencies with archives of clearance investigations to make those materials available electronically, thus necessitating agency expenses for new imaging equipment. Likewise, EHRI will require agencies to make modifications to their systems allowing electronic personnel records to be transmitted to OPM’s central repository. Getting cooperation from all affected agencies in making these investments will be challenging. OMB’s support is a critical factor in facilitating these consolidations. For several e-government initiatives, OMB has used its statutory authority under the Clinger-Cohen Act of 1996 to direct agencies to identify and halt funding of potentially redundant IT investments. For example, OMB issued on January 10, 2003, a letter to federal agencies directing them to halt spending on agency-specific payroll modernization efforts not associated with migrating to the e-Payroll initiative. A similar letter had been issued in April 2002 directing agencies to load their security clearance information into e-Clearance’s Clearance Verification System. Beyond issues of organizational cooperation, technical integration can also be very challenging. 
Developing a common set of standards that are agreed to and used by all project partners is a key factor for integrating disparate, noninteroperable systems and services. Ensuring both that processes are in place by which partners can select and agree upon standards and that all partners adopt them is key to successfully establishing standards. Finally, standardization within the framework of the emerging Federal Enterprise Architecture is key to promoting compliant development and implementation across the government. OPM officials said they plan to use the Federal Enterprise Architecture to document specific data requirements for the human capital functions supported by their e-government initiatives. OPM has taken steps to involve its partners and other federal agencies in the process of identifying opportunities for standardization on the e-Payroll initiative. However, it still faces the challenging task of getting federal agencies to reach agreement on a single payroll standard that they all can use. As agencies migrate ultimately to this single standard, changes may need to be made either to provider payroll processes and standards—so that the various payroll mandates can be accommodated—or to the mandated requirements themselves, so that agencies can conform to the single payroll standard. For example, the Department of Veterans Affairs’ Acting Deputy Assistant Secretary for Finance expressed concern that administering payroll systems under Title 38 of the United States Code—the legislation that governs the agency’s payroll processes—was very complex, and that significant changes in payroll processing could be necessary as the agency migrates to its new payroll provider. According to an OPM study, in addition to Title 38, there are at least 13 other sets of legislated federal payroll provisions that will need to be reviewed and addressed before a single federal payroll system can be implemented. 
Without agreement on standards, changes mandated by OPM may not fully address agencies’ individual payroll processing requirements, increasing the risk that agencies will not be able to migrate as planned to the chosen governmentwide standard. OPM may face similar challenges in establishing standards for official electronic personnel records, as part of EHRI. OPM officials conducted an exercise to identify all the various types of data captured by federal personnel forms. OPM officials identified 89 major data categories, with over 500 data elements. OPM officials recognize the challenge they face in seeking agreement across federal agencies on standardizing these data elements, a process which is still in its early stages. While it is relatively easy to develop and implement Web sites that facilitate exchange of information—as some of OPM’s initiatives do—the agency can expect greater challenges in getting cooperation across the government to consolidate functions by shutting down redundant systems, investing in new technologies, and committing to new governmentwide standards. For several of OPM’s initiatives—including e-Payroll and EHRI—much of this process still remains to be completed. One of the goals of OMB’s e-government strategy includes achieving cost savings as an outcome of implementing the 25 e-government initiatives. For example, in its 2002 strategy OMB estimated that these initiatives could generate several billion dollars in savings by reducing operating inefficiencies, redundant spending, and excessive paperwork, and it also estimated that the initiatives would make available over $1 billion in savings from realigning redundant investments. In addition, OMB has stated that the initiatives were selected for inclusion in the e-government strategy because they provided the most value to citizens while generating cost savings or improving the effectiveness of the government. 
OPM has estimated substantial cost savings that officials believe can be attributed to the e-government initiatives. Specifically, the agency estimates that the total savings expected from all five of its e-government initiatives will be more than $2.6 billion through fiscal year 2012. Such savings would be realized through performance enhancements that could reduce expenses such as costs per application for security clearances, costs per transaction for payroll processing, and costs associated with hiring new federal employees. Table 3 provides an overview of the cost savings estimated by OPM for its initiatives. OPM faces a significant challenge in realistically estimating the financial savings to be derived from its e-government initiatives. In many cases, estimated cost savings associated with process improvements are only loosely based on measures that are inherently abstract, such as the average cost of performing a certain function across the government. For example, e-Training project officials estimate that federal agencies can reduce training costs substantially by substituting electronic courses taken through e-Training—which cost approximately $10 to $15 per training instance—for traditional courses, which cost on average $150 per training instance, including travel. However, it is unclear to what extent this kind of substitution will actually take place, or how it could lead to savings of $784 million through 2012, as forecast by OPM. The e-Training project manager told us that the estimate was based on cost avoidance for training tuition, travel, and economies of scale in acquiring training software licenses. Similarly, for the Recruitment One-Stop initiative, project officials predict that implementation will lead to a $112 reduction in the average cost of hiring a new federal employee in fiscal year 2005—from $2,790 to $2,678, or about 4 percent.
With about 150,000 new federal hires each year, the total savings through 2012 would amount to about $168 million—significantly less than the total cost savings of $365 million over that period that OPM estimates. According to OPM officials, the additional savings would be gained through other factors contributing to future efficiencies, although specific performance measures had not yet been established. OPM’s method for projecting cost savings due to process improvements may overstate the savings that can be reasonably attributed to those improvements. Specifically, officials stated that for at least one initiative, Recruitment One-Stop, estimated savings included continuing annual efficiency gains due to such things as expected “policy improvements” that would not be a direct result of implementing the Recruitment One-Stop initiative. Further, OPM has not developed mechanisms to track actual training expenditures at agencies to determine whether its estimated governmentwide savings are being realized. With estimated savings based on abstract, average governmentwide costs, it will likely be very difficult to develop methods for documenting the savings associated with process streamlining at each agency across the federal government. In another example, e-Payroll is planned to reduce the number of federal payroll service providers from 22 to 4, and then consolidate those 4 to 2. Clearly, cost savings can be found by reducing the number of payroll systems operated and maintained by the federal government and avoiding the costs of updating or modernizing those systems. However, OPM has not clearly identified all the factors that would contribute to such savings, or how those savings will be measured. Cost savings from eliminating redundant systems are also a factor—though a smaller one—in the savings projected for Recruitment One-Stop and e-Training.
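As a check on the arithmetic behind these figures, the cited numbers can be reproduced directly. This is a back-of-the-envelope sketch, not OPM's model; in particular, the ten-year horizon below is an assumption inferred from the cited totals (the testimony says only "through 2012").

```python
# Back-of-the-envelope check of the Recruitment One-Stop savings figures.
cost_before = 2790        # average cost of hiring a new federal employee ($)
cost_after = 2678         # projected cost per hire in fiscal year 2005 ($)
hires_per_year = 150_000  # approximate annual federal hires

savings_per_hire = cost_before - cost_after         # $112 per hire
pct_reduction = savings_per_hire / cost_before      # about 4 percent
annual_savings = hires_per_year * savings_per_hire  # $16.8 million per year

# Assumption: a ten-year horizon reproduces the cited $168 million total.
years = 10
total_savings = annual_savings * years

print(savings_per_hire, round(pct_reduction * 100, 1), total_savings)
```

On these inputs the per-hire savings ($112, about 4 percent) and the roughly $168 million cumulative figure match the testimony; the gap between that figure and OPM's $365 million estimate is the unexplained remainder discussed above.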
Effectively measuring cost savings is a challenge for all agencies engaged in efforts to streamline or transform government processes through e-government. To be truly effective in meeting the goals set out in OMB’s e-government strategy, agencies need to establish complete, meaningful, and quantitative measures of cost savings. Until such measures can be implemented, predicted cost savings will be difficult to confirm. In summary, OPM has made progress in moving forward with its five e-government initiatives, which, if fully implemented, could have significant benefits by providing more streamlined and seamless federal personnel processes, and by saving the taxpayers millions through eliminating redundant payroll and other systems. However, OPM continues to face several challenges in implementing and carrying out its e-government initiatives, including managing acquisitions while working to meet accelerated e-government project schedules; consolidating common, governmentwide human resource-related functions; and realistically estimating and measuring the cost savings that can be expected from these initiatives. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have at this time. If you should have any questions about this testimony, please contact me at (202) 512-6240 or via e-mail at koontzl@gao.gov. Other major contributors to this testimony included Barbara Collier, Felipe Colón, Jr., Larry Crosland, John de Ferrari, and Elizabeth Roach.

Internet Cigarette Sales: Limited Compliance and Enforcement of the Jenkins Act Result in Loss of State Tax Revenue. GAO-03-714T. Washington, D.C.: May 1, 2003.
Electronic Procurement: Business Strategy Needed for GSA’s Advantage System. GAO-03-328. Washington, D.C.: February 19, 2002.
Internet Gambling: An Overview of the Issues. GAO-03-89. Washington, D.C.: December 2, 2002.
International Electronic Commerce: Definitions and Policy Implications. GAO-02-404. Washington, D.C.: March 1, 2002.
Electronic Commerce: Small Business Participation in Selected On-line Procurement Programs. GAO-02-1. Washington, D.C.: October 29, 2001.
On-Line Trading: Investor Protections Have Improved but Continued Attention Is Needed. GAO-01-858. Washington, D.C.: July 20, 2001.
Internet Pharmacies: Adding Disclosure Requirements Would Aid State and Federal Oversight. GAO-01-69. Washington, D.C.: October 19, 2000.
Sales Taxes: Electronic Commerce Growth Presents Challenges; Revenue Losses Are Uncertain. GAO/GGD/OCE-00-165. Washington, D.C.: June 30, 2000.
Commodity Exchange Act: Issues Related to the Regulation of Electronic Trading Systems. GAO/GGD-00-99. Washington, D.C.: May 5, 2000.
Trade with the European Union: Recent Trends and Electronic Commerce Issues. GAO/T-NSIAD-00-46. Washington, D.C.: October 13, 1999.
Electronic Banking: Enhancing Federal Oversight of Internet Banking Activities. GAO/T-GGD-99-152. Washington, D.C.: August 3, 1999.
Electronic Banking: Enhancing Federal Oversight of Internet Banking Activities. GAO/GGD-99-91. Washington, D.C.: July 6, 1999.
Securities Fraud: The Internet Poses Challenges to Regulators and Investors. GAO/T-GGD-99-34. Washington, D.C.: March 22, 1999.
Retail Payments Issues: Experience with Electronic Check Presentment. GAO/GGD-98-145. Washington, D.C.: July 14, 1998.
Identity Fraud: Information on Prevalence, Cost, and Internet Impact is Limited. GAO/GGD-98-100BR. Washington, D.C.: May 1, 1998.
Electronic Banking: Experiences Reported by Banks in Implementing On-line Banking. GAO/GGD-98-34. Washington, D.C.: January 15, 1998.
IRS’s 2002 Tax Filing Season: Returns and Refunds Processed Smoothly; Quality of Assistance Improved. GAO-03-314. Washington, D.C.: December 20, 2002.
Tax Administration: Electronic Filing’s Past and Future Impact on Processing Costs Dependent on Several Factors. GAO-02-205.
Washington, D.C.: January 10, 2002.
GSA On-Line Procurement Programs Lack Documentation and Reliability Testing. GAO-02-229R. Washington, D.C.: December 21, 2001.
U.S. Postal Service: Update on E-Commerce Activities and Privacy Protections. GAO-02-79. Washington, D.C.: December 21, 2001.
Computer-Based Patient Records: Better Planning and Oversight By VA, DOD, and IHS Would Enhance Health Data Sharing. GAO-01-459. Washington, D.C.: April 30, 2001.
USDA Electronic Filing: Progress Made, But Central Leadership and Comprehensive Implementation Plan Needed. GAO-01-324. Washington, D.C.: February 28, 2001.
U.S. Postal Service: Postal Activities and Laws Related to Electronic Commerce. GAO/GGD-00-188. Washington, D.C.: September 7, 2000.
U.S. Postal Service: Electronic Commerce Activities and Legal Matters. GAO/T-GGD-00-195. Washington, D.C.: September 7, 2000.
Defense Management: Electronic Commerce Implementation Strategy Can Be Improved. GAO/NSIAD-00-108. Washington, D.C.: July 18, 2000.
Food Stamp Program: Better Use of Electronic Data Could Result in Disqualifying More Recipients Who Traffic Benefits. GAO/RCED-00-61. Washington, D.C.: March 7, 2000.
National Archives: The Challenge of Electronic Records Management. GAO/T-GGD-00-24. Washington, D.C.: October 20, 1999.
National Archives: Preserving Electronic Records in an Era of Rapidly Changing Technology. GAO/GGD-99-94. Washington, D.C.: July 19, 1999.
Geographic Information Systems: Challenges to Effective Data Sharing. GAO-03-874T. Washington, D.C.: June 10, 2003.
Electronic Government: Success of the Office of Management and Budget’s 25 Initiatives Depends on Effective Management and Oversight. GAO-03-495T. Washington, D.C.: March 13, 2003.
Electronic Government: Selection and Implementation of the Office of Management and Budget’s 24 Initiatives. GAO-03-229. Washington, D.C.: November 22, 2002.
Electronic Government: Proposal Addresses Critical Challenges. GAO-02-1083T. Washington, D.C.: September 18, 2002.
Information Management: Update on Implementation of the 1996 Electronic Freedom of Information Act Amendments. GAO-02-493. Washington, D.C.: August 30, 2002.
Information Technology: OMB Leadership Critical to Making Needed Enterprise Architecture and E-government Progress. GAO-02-389T. Washington, D.C.: March 21, 2002.
Electronic Government: Challenges to Effective Adoption of the Extensible Markup Language. GAO-02-327. Washington, D.C.: April 5, 2002.
Information Resources Management: Comprehensive Strategic Plan Needed to Address Mounting Challenges. GAO-02-292. Washington, D.C.: February 22, 2002.
Elections: Perspectives on Activities and Challenges Across the Nation. GAO-02-3. Washington, D.C.: October 15, 2001.
Electronic Government: Better Information Needed on Agencies’ Implementation of the Government Paperwork Elimination Act. GAO-01-1100. Washington, D.C.: September 28, 2001.
Electronic Government: Challenges Must Be Addressed With Effective Leadership and Management. GAO-01-959T. Washington, D.C.: July 11, 2001.
Electronic Government: Selected Agency Plans for Implementing the Government Paperwork Elimination Act. GAO-01-861T. Washington, D.C.: June 21, 2001.
Information Management: Electronic Dissemination of Government Publications. GAO-01-428. Washington, D.C.: March 30, 2001.
Information Management: Progress in Implementing the 1996 Electronic Freedom of Information Act Amendments. GAO-01-378. Washington, D.C.: March 16, 2001.
Regulatory Management: Communication About Technology-Based Innovations Can Be Improved. GAO-01-232. Washington, D.C.: February 12, 2001.
Electronic Government: Opportunities and Challenges Facing the FirstGov Web Gateway. GAO-01-87T. Washington, D.C.: October 2, 2000.
Electronic Government: Government Paperwork Elimination Act Presents Challenges for Agencies. GAO/AIMD-00-282. Washington, D.C.: September 15, 2000.
Internet: Federal Web-based Complaint Handling. GAO/AIMD-00-238R. Washington, D.C.: July 7, 2000.
Federal Rulemaking: Agencies’ Use of Information Technology to Facilitate Public Participation. GAO/GGD-00-135R. Washington, D.C.: June 30, 2000.
Electronic Government: Federal Initiatives Are Evolving Rapidly But They Face Significant Challenges. GAO/T-AIMD/GGD-00-179. Washington, D.C.: May 22, 2000.
Information Technology: Comments on Proposed OMB Guidance for Implementing the Government Paperwork Elimination Act. GAO/AIMD-99-228R. Washington, D.C.: July 2, 1999.
Bank Regulators’ Evaluation of Electronic Signature Systems. GAO-01-129R. Washington, D.C.: November 8, 2000.
Electronic Signature: Sanction of the Department of State’s System. GAO/AIMD-00-227R. Washington, D.C.: July 10, 2000.
Internet Management: Limited Progress on Privatization Project Makes Outcome Uncertain. GAO-02-805T. Washington, D.C.: June 12, 2002.
Telecommunications: Characteristics and Competitiveness of the Internet Backbone Market. GAO-02-16. Washington, D.C.: October 16, 2001.
Telecommunications: Characteristics and Choices of Internet Users. GAO-01-345. Washington, D.C.: February 16, 2001.
Telecommunications: Technological and Regulatory Factors Affecting Consumer Choice of Internet Providers. GAO-01-93. Washington, D.C.: October 12, 2000.
Department of Commerce: Relationship with the Internet Corporation for Assigned Names and Numbers. GAO/OGC-00-33R. Washington, D.C.: July 7, 2000.
Internet Privacy: Implementation of Federal Guidance for Agency Use of “Cookies.” GAO-01-424. Washington, D.C.: April 27, 2001.
Record Linkage and Privacy: Issues in Creating New Federal Research and Statistical Information. GAO-01-126SP. Washington, D.C.: April 2001.
Internet Privacy: Federal Agency Use of Cookies. GAO-01-147R. Washington, D.C.: October 20, 2000.
Internet Privacy: Comparison of Federal Agency Practices with FTC’s Fair Information Principles. GAO-01-113T. Washington, D.C.: October 11, 2000.
Internet Privacy: Comparison of Federal Agency Practices with FTC’s Fair Information Principles.
GAO/AIMD-00-296R. Washington, D.C.: September 11, 2000.
Internet Privacy: Agencies’ Efforts to Implement OMB’s Privacy Policy. GAO/GGD-00-191. Washington, D.C.: September 5, 2000.
Social Security Numbers: Subcommittee Questions Concerning the Use of the Number for Purposes Not Related to Social Security. GAO/HEHS/AIMD-00-253R. Washington, D.C.: July 7, 2000.
Electronic Government: Challenges to the Adoption of Smart Card Technology. GAO-03-1108T. Washington, D.C.: September 9, 2003.
Electronic Government: Progress in Promoting Adoption of Smart Card Technology. GAO-03-144. Washington, D.C.: January 3, 2003.
Computer Security: Weaknesses Continue to Place Critical Federal Operations and Assets at Risk. GAO-01-600T. Washington, D.C.: April 5, 2001.
Information Security: Advances and Remaining Challenges to Adoption of Public Key Infrastructure Technology. GAO-01-277. Washington, D.C.: February 26, 2001.
Information Security: IRS Electronic Filing Systems. GAO-01-306. Washington, D.C.: February 16, 2001.

Electronic government (e-government) refers to the use of information technology (IT), including Web-based Internet applications, to enhance access to and delivery of government information and services, as well as to improve the internal efficiency and effectiveness of the federal government. The Office of Personnel Management (OPM) is managing five e-government initiatives whose goal is to transform the way OPM oversees the government’s human capital functions. These five initiatives are among 25 identified by the Office of Management and Budget (OMB) as foremost in the drive toward e-government transformation. The 25 initiatives have ambitious goals, including eliminating redundant, nonintegrated business operations and systems and improving service to citizens by an order of magnitude. Achieving these results, according to OMB, could produce billions of dollars in savings from improved operational efficiency.
In today’s testimony, among other things, GAO identifies the challenges facing OPM as it moves forward in implementing the five human capital initiatives. OPM’s five e-government initiatives are an ambitious attempt to transform the way human capital functions and services are carried out in the federal government. OPM faces several challenges that, if not fully met, could prevent it from meeting its objectives and realizing projected improvements and dollar savings. For instance, in order to meet a perceived need for quick results, alterations have been made to the acquisition plans for several of the 25 OMB-sponsored e-government initiatives, including OPM’s Recruitment One-Stop initiative. In OPM’s recent decision to continue with its awarded contract for Recruitment One-Stop, despite a successful bid protest by Symplicity Corporation, agency officials perceived the need for quick results to be one factor outweighing the importance of issues raised by GAO concerning the conduct of the procurement. However, by taking this course, OPM risks alienating potential supporters of its initiative. Further, managing the migration from agency-specific systems to consolidated systems will be a challenge, because agencies may be required to take positive action to shut down existing systems and invest in additional or updated technology to use the new, consolidated systems resulting from OPM’s five initiatives. Consequently, it will be crucial for OPM to implement effective change management and communication policies. In addition, technical integration across agencies to support consolidation, including the development of standards, is a formidable task. Finally, OPM also faces a significant challenge in realistically estimating the cost savings to be derived from these initiatives. In many cases, estimates of cost savings are only loosely based on measures that are difficult to quantify, such as the average cost of performing a certain function across the government.
To be truly effective in meeting its goals, OPM needs to establish complete, meaningful, and quantitative measures of cost savings.
The United Nations comprises six principal bodies, including the General Assembly and the Secretariat, as well as funds and programs, such as UNDP, and specialized agencies, such as UNESCO. These funds, programs, and specialized agencies have their own governing bodies and budgets, but follow the guidelines of the UN Charter. Article 101 of the UN Charter calls for staff to be recruited on the basis of “the highest standards of efficiency, competence, and integrity” as well as from “as wide a geographical basis as possible.” Each UN agency has developed its own human resource policies and practices, and staff rules. Of the five agencies we reviewed, three—the Secretariat, IAEA, and UNESCO—had quantitative formulas that establish targets for equitable geographical representation in designated professional positions. UNHCR had not established a quantitative formula or positions subject to geographic representation, but had agreed to an informal target for equitable U.S. representation. UNDP generally followed the principle of equitable geographic representation, but had not adopted formal or informal targets. Agencies with formal quantitative targets for equitable representation do not apply these targets to all professional positions. Instead, these organizations set aside positions that are subject to geographic representation from among the professional and senior positions performing core agency functions, funded from regular budget resources. Positions that are exempt from being counted geographically include linguist and peacekeeping positions, positions funded by extrabudgetary resources, and short-term positions. In addition, these organizations utilize various nonstaff positions, such as contractors and consultants. The Department of State is the U.S. agency primarily responsible for leading U.S. efforts toward achieving equitable U.S. employment representation in UN organizations. While State is responsible for promoting and seeking to increase U.S.
representation in the UN, the UN entities themselves are ultimately responsible for hiring their employees and achieving equitable representation. U.S. citizens were underrepresented at three of the five UN agencies we reviewed: IAEA, UNESCO, and UNHCR. Given projected staff levels, retirements, and separations for 2006-2010, these agencies need to hire more Americans than they have in recent years to meet their minimum targets for equitable U.S. representation in 2010. Relative to UN agencies’ formal or informal targets for equitable geographic representation, U.S. citizens were underrepresented at three of the five agencies we reviewed: IAEA, UNESCO, and UNHCR. U.S. citizens were equitably represented at the UN Secretariat, though at the lower end of its target range, while the fifth agency, UNDP, had not established a target for U.S. representation. U.S. citizens filled about 11 percent of UNDP’s professional positions. Table 1 provides information on U.S. representation at the five UN agencies as of 2005. Table 1 also shows that the percentage of U.S. citizens employed in nongeographic positions (or nonregular positions in the case of UNHCR and UNDP) was higher at IAEA, UNHCR, and UNDP and lower at the Secretariat and UNESCO compared to the percentage of geographic (or regular) positions held by U.S. citizens. As shown in table 2, U.S. citizen representation in geographic positions in “all grades” between 2001 and 2005 had been declining at UNHCR and displayed no clear trend at the other four UN agencies. U.S. representation in policy-making and senior-level positions increased at two agencies, IAEA and UNDP, and displayed no overall trend at the Secretariat, UNESCO, and UNHCR over the full five years. At the Secretariat, although no trend was indicated, U.S. representation had been decreasing in policy-making and senior-level positions since 2002.
At UNESCO, the data for 2001 to 2004 did not reflect a trend, but the overall percentage of Americans increased in 2005, reflecting increased recruiting efforts after the United States rejoined UNESCO in 2003. At UNHCR, the representation of U.S. citizens in these positions grew steadily from 2001 to 2004, but declined in 2005. We estimated that each of the four agencies with geographic targets (the Secretariat, IAEA, UNESCO, and UNHCR) would need to hire U.S. citizens in greater numbers than they had in recent years to achieve their minimum targets by 2010, given projected staff levels, retirements, and separations; otherwise, with the exception of UNESCO, U.S. geographic representation will decline further. As shown in table 3, IAEA and UNHCR would need to more than double their current average hiring rates to achieve targets for U.S. representation. The Secretariat could continue to meet its minimum geographic target for U.S. citizens if it increased its annual hiring of U.S. citizens from 20 to 23. UNESCO could achieve its minimum geographic target by increasing its current hiring average of 4.5 Americans to 6 Americans. Although the fifth agency, UNDP, did not have a target, it would have to increase its annual hiring average of U.S. citizens from 17.5 to 26 in order to maintain its current ratio of U.S. regular professional staff to total agency regular professional staff. If current hiring levels are maintained through 2010, two of the five agencies, IAEA and UNHCR, would fall substantially below their minimum targets. In only one agency, UNESCO, would the percentage of geographic positions filled by U.S. citizens increase under current hiring levels, due in part to the recent increased hiring of U.S. citizens. A combination of barriers, including some common factors as well as agency-specific factors, adversely affected recruitment and retention of professional staff, including Americans, at each of the five UN agencies.
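The kind of hiring-rate projection summarized in table 3 can be illustrated with a simple year-by-year model. This is an illustrative sketch only: all inputs below are hypothetical, and GAO's actual agency-specific staff levels, retirement, and separation figures are not reproduced here.

```python
# Illustrative projection of the U.S. share of an agency's geographic
# positions under a fixed annual hiring rate (hypothetical inputs only).
def projected_us_share(us_staff, total_staff,
                       us_separations, total_separations,
                       us_hires, total_hires, years=5):
    """Roll staffing forward year by year and return the ending U.S. share."""
    for _ in range(years):
        us_staff += us_hires - us_separations
        total_staff += total_hires - total_separations
    return us_staff / total_staff

# Hypothetical agency: 100 U.S. staff of 1,000 total, with 10 U.S. and 80
# total separations per year. Hiring 12 Americans out of 80 annual hires
# lifts the U.S. share from 10 percent toward 11 percent over five years.
share = projected_us_share(100, 1000, 10, 80, 12, 80, years=5)
print(round(share, 3))
```

The same mechanics explain why an agency whose U.S. hiring merely matches U.S. separations sees its share stay flat, and why hiring below that rate causes the share to decline toward 2010.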
These barriers combined with distinct agency-specific factors to impede recruitment and retention. We identified the following six barriers that affected U.S. representation in the UN agencies we reviewed, though often to differing degrees:

Nontransparent human resource practices. A key barrier to American representation across the five UN agencies was the lack of transparent human resource management practices, according to Americans employed at UN organizations. For example, some UN managers circumvented the competitive hiring process by employing individuals on short-term contracts—positions that were not vetted through the regular, competitive process—for long-term needs.

Limited external opportunities. Recruiting U.S. candidates was difficult because agencies offered a limited number of posts to external candidates. Each of the organizations we reviewed, except IAEA, advertised professional vacancies to current employees before advertising them externally in order to provide career paths and motivation for their staff. We found that three of the five agencies—UNESCO, UNHCR, and UNDP—filled 50 percent or more of new appointments through promotions or with other internal candidates rather than by hiring external candidates. IAEA filled a large percentage of its positions with external candidates because, in addition to not giving internal candidates hiring preference, the agency employed the majority of its staff members for 7 years or less. Although the data indicated that the Secretariat hired a significant percentage of external candidates, the Secretariat’s definition of “external candidates” included staff on temporary contracts and individuals who had previous experience working at the agency.

Lengthy hiring process. The agencies’ lengthy hiring processes can deter candidates from accepting UN employment.
For example, a report from the Secretary General stated that the average hiring process was too slow, taking 174 days from the time a vacancy announcement was issued to the time a candidate was selected, causing some qualified applicants to accept jobs elsewhere. Many Americans we interviewed concurred with the report, saying that it was difficult to plan a job move when there was a long delay between submitting an application and receiving an offer. In March 2006, the Secretary General proposed cutting the average recruitment time in half.

Low or unclear compensation. Comparatively low salaries and benefits that were not clearly explained were among the most frequently mentioned deterrents to UN employment for Americans. American employees we interviewed noted that UN salaries, particularly for senior and technical posts, were not comparable with U.S. government and private sector salaries. When candidates consider UN salaries in tandem with UN employee benefits, such as possible reimbursement for U.S. taxes and school tuition allowances through college, UN compensation may be more attractive. However, U.S. citizens employed at IAEA and UNESCO said that their agency did not clearly explain the benefits, or explained them only after a candidate had accepted a position. Incomplete or late information hampered a candidate’s ability to decide in a timely manner whether a UN position was in his or her best interests. In addition, difficulty securing spousal employment can decrease family income and may also affect American recruitment since many U.S. families have two wage earners. At many overseas UN duty stations, work permits can be difficult to obtain, the local economy may offer few employment opportunities, and knowledge of the local language may be required.

Required mobility or rotation.
UNHCR and UNDP required their staff to change posts at least every 3 to 6 years with the expectation that staff serve the larger portion of their career in the field; the UN Secretariat and UNESCO were implementing similar policies. While IAEA did not require its employees to change posts, it generally hired employees for only 7 years or less. Such policies dissuaded some Americans from accepting or staying in a UN position because of the disruptions to personal or family life that such frequent moves can cause.

Limited U.S. government support. At four of the five agencies we reviewed—all except IAEA—a number of American employees said that they did not receive U.S. government support during their efforts to obtain a UN job or to be promoted at the job they held. The U.S. government supported candidates applying for director-level, or higher, posts, and put less emphasis on supporting candidates seeking lower-level professional posts. Although UN employees are international civil servants directly hired by UN agencies, some countries facilitate the recruitment of their nationals by referring qualified candidates, conducting recruitment missions, and sponsoring JPOs or Associate Experts.

Distinct agency-specific factors also impeded recruitment and retention. For example, candidates serving in professional positions funded by their member governments were more likely to be hired by the Secretariat than those who took the Secretariat’s entry-level exam; however, the United States had not funded such positions at the Secretariat. At the entry level, hiring for professional positions was limited to an average of 2 percent of individuals invited to take the Secretariat’s National Competitive Recruitment Exam. In contrast, the Secretariat hired an average of 65 percent of Associate Experts sponsored by their national government. Continuing U.S. underrepresentation at the IAEA was described by U.S.
government officials as a “supply-side issue,” with the pool of American candidates with the necessary education and experience decreasing, as nuclear specialists are aging and few young people are entering the nuclear field. The United States’ 19-year withdrawal from UNESCO contributed to its underrepresentation. When the United States left UNESCO in 1984, Americans comprised 9.6 percent of the organization’s geographic professional staff. When it rejoined in 2003, Americans comprised only 2.9 percent. By 2005 that number had increased to 4.1 percent—the third largest group of nationals UNESCO employed, although still below the minimum geographic target. The difficult conditions that accompany much of UNHCR’s work, coupled with the requirement to change duty stations every 4 years, contributed to attrition at the mid-career levels. UNHCR’s requirement that employees change duty stations every 4 years was one of the most frequently cited barriers to retaining staff among the American employees we interviewed. UNHCR’s mission to safeguard the rights and well-being of refugees necessitates work in hardship and high-risk locations. As such, UNHCR has twice as many hardship duty stations as any other UN agency. Several barriers to increasing U.S. representation were the leading factors at UNDP and were also present at other UN agencies, according to American employees and other officials. In addition, UNDP’s Executive Board had traditionally managed the organization with the understanding that its staff be equally represented from northern (mostly developed) and southern (mostly developing) countries, and had recently focused on improving the north-south balance of staff at management levels by increasing the hiring of candidates from southern countries. State targeted its recruitment efforts for senior and policy-making UN positions, and, although it was difficult to directly link State’s efforts to UN hiring decisions, U.S. 
representation in these positions either improved or displayed no trend in the five UN agencies we reviewed. State also increased its efforts to improve overall U.S. representation; however, despite these efforts, U.S. representation in entry-level positions declined or did not reflect a trend in four of the five UN agencies. Additional options exist to target potential pools of candidates for these positions. State focused its recruiting efforts for U.S. citizen employment at UN agencies on senior-level and policy-making positions because of the influence that these positions have within the organization. Although it is difficult to directly link State’s efforts to UN hiring decisions, the percentage of U.S. representation in senior and policymaking positions either increased or did not display a trend at each of the five UN agencies we reviewed between 2001 and 2005. The U.S. share of senior and policymaking positions increased at IAEA and UNDP, whereas the U.S. share of these positions at the other three UN agencies displayed no trend over that period. Since 2001, State has devoted additional resources and undertaken several new initiatives in its role as the lead U.S. agency for supporting and promoting the employment of Americans in UN organizations. First, State increased resources for disseminating UN vacancy information. State increased the number of staff positions from two to five, and added a sixth person who worked part-time on UN employment issues. One of the new staff focused on recruiting Americans for senior-level positions at UN organizations. According to State, the other staff have been recruiting candidates for professional positions at career fairs and other venues; however, a large portion of their work has been focused on providing information to potential applicants and disseminating information on UN vacancies and opportunities. 
In addition, State has increased outreach for the Secretariat’s annual National Competitive Recruitment Exam for entry-level candidates by advertising it in selected newspapers. The number of Americans invited to take the exam increased from 40 in 2001 to 277 in 2004. State reported that 178 Americans were invited to take the exam in 2007. Second, U.S. missions have shared U.S. representation reports and discussed openings with UN officials. State prepares annual reports to Congress that provide data on U.S. employment at UN agencies as well as State’s assessment of U.S. representation at selected UN organizations and these organizations’ efforts to hire more Americans. State is providing these reports to UN agencies, as we recommended in 2001. U.S. mission officials told us that they periodically meet with UN officials to discuss U.S. representation and upcoming vacancies. Finally, State has increased coordination with U.S. agencies. In 2003, State established an interagency task force to address the low representation of Americans in international organizations. Since then, task force members have met annually to discuss U.S. employment issues. Task force participants told us that at these meetings, State officials reported on their outreach activities and encouraged agencies to promote the employment of Americans at UN organizations. One of the topics discussed by task force members was how to increase support for details and transfers of U.S. agency employees to UN organizations. In May 2006, the Secretary of State sent letters to the heads of 23 federal agencies urging that they review their policies for transferring and detailing employees to international organizations to ensure that these mechanisms are positively and actively promoted. While the Secretary’s letters may help to spur U.S.
agencies to clarify their support for these initiatives, agency officials told us that their offices lacked the resources for staff details, which involve paying the salary of the detailed staff as well as “backfilling” that person’s position by adding a replacement. State also has been periodically meeting one-on-one with U.S. agencies to discuss the employment situation and recruiting efforts at specific UN organizations. A State official told us that State’s UN employment office meets with a few U.S. agencies per year to discuss UN agency staffing issues. Despite the new and continuing activities undertaken by State, U.S. representation in entry-level positions declined or displayed no trend in four of the five agencies we reviewed. U.S. representation in these positions declined at IAEA, UNHCR, and UNDP. The representation of Americans in entry-level positions at the Secretariat displayed no trend during the time period. At UNESCO, U.S. representation increased from 1.3 percent in 2003 to 2.7 percent in 2004, reflecting the time period when the United States rejoined the organization. We identified several options to target U.S. representation in professional positions, including the following: Maintaining a roster of qualified candidates. Prior to 2001, State had maintained a roster of qualified American candidates for professional and technical positions, but discontinued it. State officials told us that they have not maintained a professional roster, or the prescreening of candidates, despite the recent increase in staff resources, because maintaining such a roster had been resource intensive and because the office does not actively recruit for UN professional positions at the entry- and mid-levels. However, State acknowledged that utilizing new technologies, such as developing a Web-based roster, may reduce the time and cost of updating a roster. Other U.S.
government and UN officials told us that some other countries maintained rosters of prescreened, qualified candidates for UN positions and that this practice was an effective strategy for promoting their nationals. In July 2007, State officials said that they began researching Internet-based options for compiling a roster of potential U.S. candidates. State estimated the cost to set up such a roster at about $100,000, but had not received funding for the roster. Expanding marketing and outreach activities. State had not taken steps that could further expand the audience for its outreach efforts. For example, while State had increased its coordination with other U.S. agencies on UN employment issues and distributed the biweekly vacancy announcements to agency contacts, U.S. agency officials who received these vacancy announcements told us that they lacked the authority to distribute the vacancies beyond their particular office or division. One official commented that State had not established the appropriate contacts to facilitate agency-wide distribution of UN vacancies, and that the limited dissemination had neutralized the impact of this effort. Several interagency task force participants also stated that no specific follow-up activities were discussed or planned between the annual meetings, and they could not point to any tangible results or outcomes from the meetings. State also had not taken advantage of opportunities to expand the audience for its outreach activities. For example, State did not work with the Association of Professional Schools of International Affairs to reach potential candidates or advertise in some outlets that reach Peace Corps volunteers. In July 2007, State officials said they continue to reach out to new groups and attend new career fairs but have faced difficulty in identifying pools of candidates with the required skills and experience. Increasing and improving UN employment information on U.S. agency Web sites.
State’s UN vacancy list and its UN employment Web site had limitations. For example, the list of vacancies was not organized by occupation, or even organization, and readers had to search the entire list for openings in their areas of interest. Further, State’s UN employment Web site had limited information on other UN employment programs and did not link to U.S. agencies that provide more specific information, such as the Department of Energy’s Brookhaven National Laboratory Web site. In addition, the Web site provided limited information or tools to clarify common questions, such as those pertaining to compensation and benefits. For example, the Web site did not provide a means for applicants to obtain more specific information on their expected total compensation, including benefits and U.S. income tax. Since we issued our report, State has added a UN pamphlet on benefits and compensation to its Web site. In July 2007, State officials told us they are exploring ways to improve the information available on UN compensation and benefits. For our 2006 report, we reviewed 22 additional U.S. mission and U.S. agency Web sites, and they revealed varying, and in many cases limited, information on UN employment opportunities. Overall, 9 of the 22 U.S. mission and agency Web sites did not have links to UN employment opportunities. Nearly 60 percent of the missions and agencies provided some information or links to information on salaries and benefits. We updated our analysis in July 2007 and found the situation had worsened somewhat. Eleven of the 22 U.S. mission and agency Web sites did not have links to UN employment opportunities and only about 50 percent of these Web sites provided some information or links to information on salaries and benefits. Analyzing the costs and benefits of sponsoring JPOs. The U.S. 
government sponsored JPOs at two of the five UN agencies that we reviewed, but had not assessed the overall costs and benefits of supporting JPOs as a mechanism for increasing U.S. representation across UN agencies. Among the five agencies, State had funded a long-standing JPO program only at UNHCR, sponsoring an average of 15 JPOs per year between 2001 and 2005. The Department of Energy’s Brookhaven National Laboratory also had supported two JPOs at IAEA since 2004. For four of the five agencies we reviewed, the percentage of individuals who were hired for regular positions upon completion of the JPO program ranged from 34 to 65 percent. In some cases, former JPOs were offered regular positions and did not accept them, or took positions in other UN organizations. The estimated annual cost for these positions to the sponsoring government ranged from $100,000 to $140,000 at the five UN agencies. State officials told us in July 2007 that they had not assessed the overall costs and benefits of supporting JPOs. Achieving equitable U.S. representation at UN organizations will become increasingly difficult. Four of the five UN organizations we reviewed, all except UNESCO, will have to hire Americans in increasing numbers merely to maintain the current levels of U.S. representation. Failure to increase such hiring will lead the four UN organizations with geographic targets to fall below or stay below the minimum thresholds set for U.S. employment. As the lead department in charge of U.S. government efforts to promote equitable American representation at the UN, State will continue to face a number of barriers to increasing the employment of Americans at these organizations, most of which are outside the U.S. government’s control. For example, lengthy hiring processes and mandatory rotation policies can deter qualified Americans from applying for or remaining in UN positions. Nonetheless, if increasing the number of U.S.
citizens employed at UN organizations remains a high priority for State, it is important that the department facilitate a continuing supply of qualified applicants for UN professional positions at all levels. State focuses much of its recruiting efforts on senior and policy-making positions, and U.S. citizens hold over 10 percent of these positions at four of the five agencies we reviewed. While State has increased its resources and activities in recent years to support increased U.S. representation overall, additional actions to facilitate the employment of Americans in entry- and mid-level professional positions are needed to overcome declining U.S. employment in these positions and meet employment targets. Because equitable representation of Americans employed at UN organizations has been a high priority for U.S. interests, we recommended that the Secretary of State take the following actions: (1) provide more consistent and comprehensive information about UN employment on the State and U.S. mission Web sites and work with U.S. agencies to expand the UN employment information on their Web sites, which could include identifying options for developing a benefits calculator that would enable applicants to better estimate their potential total compensation based on their individual circumstances; (2) expand targeted recruiting and outreach to more strategically reach populations of Americans who may be qualified for and interested in entry- and mid-level UN positions; and (3) conduct an evaluation of the costs, benefits, and trade-offs of maintaining a roster of qualified candidates for professional and senior positions determined to be a high priority for U.S. interests, and of funding Junior Professional Officers, or other gratis personnel, where Americans are underrepresented or in danger of becoming underrepresented. In commenting on a draft of our 2006 report, State concurred with and agreed to implement all of our recommendations.
In July 2007, State officials updated us on the actions they have taken in response to our 2006 report recommendations. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have. Should you have any questions about this testimony, please contact Thomas Melito, Director, at (202) 512-9601 or MelitoT@gao.gov. Other major contributors to this testimony were Cheryl Goodman, Assistant Director; Jeremy Latimer; Miriam Carroll; R.G. Steinman; Barbara Shields; Lyric Clark; Sarah Chankin-Gould; Joe Carney; and Debbie Chung. Martin De Alteriis, Bruce Kutnick, Anna Maria Ortiz, Mary Moutsos, Mark Speight, and George Taylor provided technical assistance. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This testimony discusses ways to improve the representation of American professionals at United Nations (UN) organizations. The U.S. Congress continues to be concerned that American professionals are underrepresented at some UN organizations and that insufficient progress has been made to improve U.S. representation. The equitable representation of Americans at UN organizations is a priority to Congress in part because the United States is the largest financial contributor to most of these organizations. Moreover, according to the U.S. Department of State (State), Americans bring desirable skills, values, and experience that can have a significant impact on UN organizations' operational effectiveness. This testimony is based on a report that we issued on September 6, 2006. This testimony will discuss (1) U.S.
representation status and employment trends at five UN organizations, (2) factors affecting these organizations' ability to meet U.S. representation targets, and (3) State's efforts to improve U.S. representation and additional efforts that can be taken. The United States was underrepresented in three of the five UN agencies we reviewed, and increased hiring of U.S. citizens is needed to meet agreed-upon employment targets. Based on UN agencies' formal or informal targets for equitable geographic representation, U.S. citizens were underrepresented at IAEA, UNESCO, and UNHCR, and equitably represented at the UN Secretariat, though close to the lower end of its target range. UNDP had not established a target for U.S. representation, although U.S. citizens filled about 11 percent of the agency's professional positions. Given projected staff levels, retirements, and separations for 2006 to 2010, the Secretariat, IAEA, UNESCO, and UNHCR would need to hire more Americans than they have hired in recent years to meet their minimum targets for equitable U.S. representation in 2010. While the UN agencies we reviewed faced some common barriers to recruiting and retaining professional staff, including Americans, they also faced distinct challenges. Most of these barriers and challenges were outside of the U.S. government's control. Six barriers common to UN agencies we reviewed included nontransparent human resource practices; a limited number of positions open to external candidates; lengthy hiring processes; comparatively low or unclear compensation; required staff mobility and rotation policies; and limited U.S. government support during Americans' efforts to obtain, or be promoted at, a UN job. These barriers combined with distinct agency-specific factors to impede recruitment and retention.
For example, candidates serving in professional positions funded by their member governments were more likely to be hired by the Secretariat than those who took the Secretariat's entry-level exam; however, the United States had not funded such positions at the Secretariat. In addition, IAEA had difficulty attracting U.S. employees because the number of U.S. nuclear specialists was decreasing. State has increased its efforts to support the goal of achieving equitable U.S. representation at UN organizations, and additional options exist to target professional positions. State has targeted efforts to recruit U.S. candidates for senior and policymaking UN positions, and, although it was difficult to directly link State's efforts to UN hiring decisions, U.S. representation in senior and policymaking positions either improved or did not reflect a trend in each of the five UN agencies we reviewed. State also has undertaken several efforts to improve overall U.S. representation, including adding staff to its UN employment office and increasing coordination with other U.S. agencies that work with UN organizations. For positions below the senior level, State focused on "getting the word out" by, for example, disseminating information on UN vacancies through its Web site, attending career fairs and conferences, and other means. Despite these efforts, U.S. representation in entry-level positions declined or did not display a trend in four of the five UN agencies we reviewed. Additional options to target potential pools of candidates for professional positions include: maintaining a roster of qualified American candidates; expanding marketing and outreach activities; increasing UN employment information on U.S. agency Web sites; and conducting an assessment of the costs and benefits of sponsoring Junior Professional Officers (JPO), who are entry-level employees that are financially supported by their home government.
Internal control represents an organization’s plans, methods, and procedures used to meet its missions, goals, and objectives and serves as the first line of defense in safeguarding assets and preventing and detecting errors, fraud, waste, abuse, and mismanagement. Internal control is to provide reasonable assurance that an organization’s objectives are achieved through (1) effective and efficient operations, (2) reliable financial reporting, and (3) compliance with laws and regulations. Safeguarding of assets is a subset of all these objectives. The term “reasonable assurance” is important because no matter how well-designed and operated, internal control cannot provide absolute assurance that agency objectives will be met. Cost-benefit analysis is an important consideration in designing internal control. Internal control is very broad and encompasses all controls within an organization, covering the entire mission and operations, not just financial operations. One need only look at GAO’s January 2005 High-Risk Series: An Update, in which we identify 25 areas of high risk for fraud, waste, abuse, and mismanagement, to see the breadth of internal control. While these areas are very diverse in nature, ranging from weapon systems acquisition to contract management to the enforcement of tax laws to the Medicare and Medicaid programs, all share the common denominator of having serious internal control weaknesses. In addition, as the Comptroller General testified before the House Committee on Government Reform last week, certain material weaknesses in internal control have contributed to our inability to provide an opinion on whether the consolidated financial statements of the U.S. government are fairly stated in conformity with U.S. generally accepted accounting principles. Internal control weaknesses are also at the heart of the over $45 billion in improper payments reported by the federal government in fiscal year 2004 across a range of programs.
Further, internal control includes things such as screening of air passengers and baggage to help address the risks associated with terrorism, network firewalls to keep out computer hackers, and credit checks to determine the creditworthiness of potential borrowers. The Congress has long recognized the importance of internal control, beginning with the Budget and Accounting Procedures Act of 1950, over 50 years ago. The 1950 act placed primary responsibility for establishing and maintaining internal control squarely on the shoulders of agency management. As I will discuss later, the auditor can serve an important role by independently determining whether management’s internal control is adequately designed and operating effectively and making recommendations to management to improve internal control where needed. However, the fundamental responsibility for establishing and maintaining effective internal control belongs to management. In 1982, when faced with a number of highly publicized internal control breakdowns, the Congress passed FMFIA with a goal of strengthening internal control and accounting systems. This two-page law, a copy of which is in appendix I, defined internal control broadly to include program, operational, and administrative controls as well as accounting and financial management, and reaffirmed that the primary responsibility for adequate systems of internal control rests with management. Under FMFIA, agency heads are required to establish a continuous process for assessment and improvement of their agency’s internal control and to publicly report on the status of their efforts by signing annual statements of assurance as to whether internal control is designed adequately and operating effectively. Where there are material weaknesses, the agency heads are to disclose the nature of the problems and the status of corrective actions in an annual assurance statement. 
Today, agencies are generally meeting their FMFIA reporting requirement by including this information in their Performance and Accountability reports, which also include their audited financial statements. The act also required that the Comptroller General establish internal control standards and that OMB issue guidelines for agencies to follow in assessing their internal control against the Comptroller General’s standards. OMB first issued Circular A-123, then entitled Internal Control Systems, in October 1981, in anticipation of FMFIA becoming law. In December 1982, following FMFIA enactment, OMB issued the assessment guidelines required by the act. OMB’s Guidelines for the Evaluation and Improvement of and Reporting on Internal Control Systems in the Federal Government detailed a seven-step internal control assessment process targeted to an agency’s mission and organizational structure. The Comptroller General issued Standards for Internal Control in the Federal Government in 1983. These standards apply equally to financial and nonfinancial controls. In August 1984, OMB issued a question and answer supplement to its assessment guidelines, intended to clarify the applicability of the Comptroller General’s internal control standards and to assist agencies in assessing risk and correcting weaknesses. The 1990s brought additional legislation that reinforced the significance of effective internal control. The Chief Financial Officers (CFO) Act, which among other things provided for major transformation of financial management, including the establishment of CFOs, called for financial management systems to comply with the Comptroller General’s internal control standards. The Government Performance and Results Act of 1993 required agencies to clarify missions, set strategic and performance goals, and measure performance toward those goals. Internal control plays a significant role in helping managers achieve their goals. 
The Government Management Reform Act of 1994 expanded the CFO Act by establishing requirements for the preparation and audit of agencywide financial statements and consolidated financial statements for the federal government as a whole. The 1996 Federal Financial Management Improvement Act identified internal control as an integral part of improving financial management systems. These are just a few of the legislative initiatives over the years aimed at improving government effectiveness and accountability. The Congress has been consistent over the years in demanding that agencies have effective internal control and accounting systems. From the outset, agencies faced major challenges in implementing FMFIA. The first annual assessment reports were due by December 31, 1983. This time frame gave agencies a little over a year to develop and implement an agencywide internal control assessment and reporting process to provide the information needed to support the first agency head assurance statement to the President and the Congress. OMB assembled an interagency task force called the Financial Integrity Task Force and visited all federal departments and the 10 largest agencies to foster implementation of its internal control assessment guidelines. Starting in 1983, GAO monitored and reported on FMFIA implementation efforts across the government in a series of four reports from 1984 through 1989 as well as in numerous reports targeting specific agencies and programs. In our first governmentwide report, issued in 1984, we noted that although early efforts were primarily learning experiences, agencies had demonstrated a commitment to implementing FMFIA with a good start at assessing their internal control and accounting systems. 
We found agencies had established systematic processes to assess, improve, and report on their internal control and accounting systems, and we observed that federal managers had become more aware of the need for good internal control and improved accounting systems. OMB played an active role, providing guidance and central direction to the program. Though the nature and extent of participation varied, most inspectors general also played a major role in the first year. Our 1984 report outlined key steps to improve implementation, including adequate training and guidance, the importance of a positive attitude and a mind-set to hold managers accountable for results, and the need for more internal control testing. Our second governmentwide report in 1985 noted that FMFIA had provided a significant impetus to the government’s attempts to improve internal control and accounting systems by focusing attention on the problems. Agencies continued to identify material internal control and accounting system weaknesses with a number of major improvement initiatives under way. We identified needed improvements to FMFIA implementation similar to those in our 1984 report, but also identified the need to reduce the paperwork associated with agency assessment efforts. In particular, vulnerability assessments aimed at identifying the areas of highest risk in order to prioritize more detailed internal control reviews were widely criticized by agencies as paperwork exercises. It was widely thought that while agencies had devoted considerable resources assessing the vulnerability of thousands of operations and functions, these efforts did not provide management with much reliable and useful information. Our third governmentwide report was issued in 1987. We noted that an important step in strengthening internal control is verifying that planned corrective actions have been implemented as envisioned and that the completed corrective actions have been effective.
We found instances where (1) corrective measures taken had not completely corrected the identified weaknesses and (2) actions to resolve weaknesses had been delayed, in some cases for years. Our fourth governmentwide report, issued in 1989, whose title, Ineffective Internal Controls Result in Ineffective Federal Programs and Billions in Losses, is still appropriate in today’s environment, concluded that while internal control was improving, the efforts were clearly not producing the results intended. We noted continuing widespread internal control and accounting system problems and the need for greater top-level leadership. We reported that what started off as a well-intended program to foster the continual assessment and improvement of internal control unfortunately had become mired in extensive process and paperwork. Significant attention was focused on creating a paper trail to prove that agencies had adhered to the OMB assessment process and on crafting voluminous annual reports that could exceed several hundred pages. It seemed that the assessment and reporting processes had, at least to some, become the endgame. At the same time, there were some important accomplishments coming from FMFIA. Thousands of problems were identified and fixed along the way, especially at the lower levels where internal control assessments were performed and managers could take focused actions to fix relatively simple problems. Unfortunately, many of the more serious and complex internal control and accounting system weaknesses remained largely unchanged and agencies were drowning in paper. In March 1989, GAO, along with representatives of seven agencies, OMB, and the President’s Council on Integrity and Efficiency (PCIE), reviewed aspects of FMFIA implementation as part of a subcommittee of the Internal Control Interagency Coordination Council.
The subcommittee’s report highlighted the following seven issues as requiring action: (1) link the internal control assessment and reporting process with the budget to assist the Congress and OMB in analyzing the impact of corrective actions on agency resources; (2) emphasize the early warning capabilities of the internal control process to ensure timely actions to correct weaknesses identified; (3) consolidate the review processes of various OMB circulars to eliminate overlapping assessment requirements, improve staff utilization, and reduce the paper being generated; (4) provide for and promote senior management involvement in the internal control process to ensure more effective and lasting oversight and accountability for FMFIA activities; (5) highlight the most critical internal control weaknesses in the FMFIA assurance statements to increase the usefulness of the report to the President and the Congress; (6) report on agency processes to validate actions taken to correct material weaknesses, ascertain that desired results were achieved, and reduce the likelihood of repeated occurrences of the same weaknesses; and (7) improve management awareness and understanding of FMFIA to provide for more consistent program manager interpretation and acceptance of the act. Too much process and paper continued to be a problem, and in 1995 OMB made a major revision to Circular A-123 that relaxed the assessment and reporting requirements. The 1995 revision integrated many policy issuances on internal control into a single document and provided a framework for integrating internal control assessments with other reviews being performed by agency managers, auditors, and evaluators. In addition, it gave agencies the discretion to determine which tools to use in arriving at the annual assurance statement to the President and the Congress, with the stated aim of achieving a streamlined management control program that incorporated the then administration’s reinvention principles. And this brings us to the present.
The recent December 2004 update to Circular A-123 reflects policy recommendations developed by a joint committee of representatives from the CFO Council (CFOC) and PCIE. The changes are intended to strengthen the requirements for conducting management’s assessment of internal control over financial reporting. The December 2004 revision to the Circular also emphasizes the need for agencies to integrate and coordinate internal control assessments with other internal control-related activities. We support OMB’s efforts to revitalize FMFIA through the December 2004 revisions to Circular A-123. These revisions recognize that effective internal control is critical to improving federal agencies’ effectiveness and accountability and to achieving the goals that the Congress established in 1950 and reaffirmed in 1982. The Circular correctly recognizes that instead of considering internal control an isolated management tool, agencies should integrate their efforts to meet the requirements of FMFIA with other efforts to improve effectiveness and accountability. Internal control should be an integral part of the entire cycle of planning, budgeting, management, accounting, and auditing. It should support the effectiveness and the integrity of every step of the process and provide continual feedback to management. In particular, we support the principles-based approach in the revised Circular for establishing and reporting on internal control that should increase accountability. This type of approach provides a floor for expected behavior, rather than a ceiling, and by its nature, greater judgment on the part of those applying these principles will be necessary. Accordingly, clear articulation of objectives, the criteria for measuring whether the objectives have been successfully achieved, and the rigor with which these criteria are applied will be critical. 
Providing agencies with supplemental guidance and implementation tools is particularly important, in light of the varying levels of internal control maturity that exist across government as well as the expected divergence in implementation that is typically found when a range of entities with varying capabilities apply a principles-based approach. I would now like to highlight what I think will be the six issues critical to effectively implementing the changes to Circular A-123 based on the lessons learned over the past 20 years under FMFIA. First, OMB indicated that it plans to work with the CFOC and PCIE to provide further implementation guidance. For the reasons I just highlighted, we support the development of supplemental guidance and implementation tools, which will be particularly important to help ensure that agency efforts are properly focused and meaningful. These materials should bring appropriate rigor to whatever assessment and reporting process management adopts, as well as set the bar at a level that ensures the objectives of FMFIA are being met in substance, with a caution to guard against excessive focus on process and paperwork. Supplemental guidance and implementation tools should be aimed at helping agency management achieve the bottom-line goal of getting results from effective internal control. Second, while the revised Circular A-123 emphasizes internal control over financial reporting, it will be important that proper attention also be paid to the other two internal control objectives covered by FMFIA and discussed in the Circular, which are (1) achieving effective and efficient operations and (2) complying with laws and regulations. Also, as I mentioned earlier, safeguarding assets is a subset of all three objectives. Third, managers throughout an agency and at all levels will need to provide strong support for internal control. As I discussed earlier, the responsibility for internal control does not reside solely with the CFO.
A case in point is internal control over improper payments, which is the responsibility of a range of agency officials outside of the CFO operation. Also, with respect to financial reporting, which the revised OMB Circular A-123 specifically refers to as a priority area, the CFO generally does not control all of the needed information and often depends on other business systems for much of the financial data. For example, at the Department of Defense (DOD), about 80 percent of the information needed to prepare annual financial statements comes from other business systems, such as logistics, procurement, and personnel information systems, that are not under the CFO. Fourth, agencies must strike an appropriate balance between costs and benefits, while at the same time achieving an appropriate level of internal control. Internal controls need to be designed and implemented only after properly identifying and analyzing the risks associated with achieving control objectives. Agencies need to have the right controls, in the right place, at the right time, with an appropriate balance between related costs and benefits. In this regard, the revisions to Circular A-123 outline the concept of risk assessment for internal control over financial reporting by laying out an assessment approach at the process, transaction, and application levels. A similar approach needs to be applied as well to the other business areas and the range of programs and operations as envisioned in FMFIA. Fifth, management testing of controls in operation to determine their soundness and whether they are being adhered to and to assist in the formulation of corrective actions where problems arise will be essential. This is another area covered by the revised Circular A-123. Testing can show whether internal controls are in place and operating effectively to minimize the risk of fraud, waste, abuse, and mismanagement and whether accounting systems are producing accurate, timely, and useful information.
Through adequate testing, agency managers should know what is working well and what is not. Management will then be able to focus on corrective actions as needed and on streamlining controls if testing shows that existing controls are not cost-effective. Sixth, personal accountability for results will be essential, starting with top agency management and cascading down through the organization. Regular oversight hearings, such as this one, will be critical to keeping agencies accountable and expressing the continual interest and expectations of the Congress. Independent verification and validation through the audit process, which I will talk about next, is another means of providing additional accountability. There should be clear rewards (incentives) for doing the right things and consequences (disincentives) for doing the wrong things. If a serious problem occurs because of a breakdown in internal control and it is found that management did not do its part to establish a proper internal control environment, or did not act expeditiously to fix a known problem, those responsible need to be held accountable and face the consequences of inaction. The revised Circular A-123 encourages the involvement of senior management councils in internal control assessment and monitoring, which can be an excellent means of establishing accountability and ownership for the program. In initiating the revisions to Circular A-123, OMB cited the new internal control requirements for publicly traded companies that are contained in the Sarbanes-Oxley Act of 2002. Sarbanes-Oxley was born out of the corporate accountability failures of the past several years. Sarbanes-Oxley is similar in concept to the long-standing requirements for federal agencies in FMFIA and Circular A-123. 
Under Sarbanes-Oxley, management of a publicly traded company is required to (1) annually assess the internal control over financial reporting at the company and (2) issue an annual statement on the effectiveness of internal control over financial reporting. The company’s auditors are then required to attest to and report on management’s assessment as to the effectiveness of its internal control. This is where Sarbanes-Oxley differs from FMFIA. FMFIA does not call for an auditor opinion on management’s assessment of internal control over financial reporting, nor does it call for an auditor opinion on the effectiveness of internal control. Likewise, Circular A-123 does not adopt these requirements, although the Circular does recognize that some agencies are voluntarily getting an audit opinion on internal control over financial reporting. Our position is that an auditor’s opinion on internal control over financial reporting is similarly important in the government environment. We view auditor opinions on internal control over financial reporting as an important component of monitoring the effectiveness of an entity’s risk management and accountability systems. In practicing what we preach, we not only issue an opinion on internal control over financial reporting at the federal entities where we perform the financial statement audit, including the consolidated financial statements of the U.S. government, but we also obtain an auditor’s opinion on internal control on our own annual financial statements. On their own initiative, the Social Security Administration (SSA) and Nuclear Regulatory Commission also received opinions on internal control over financial reporting for fiscal year 2004 from their respective independent auditors. In considering when to require an auditor opinion on internal control, the following four questions can be used to frame the issue:
1. Is this a major federal entity, such as the 24 departments and agencies covered by the CFO Act? There would be different considerations for small, simple entities versus large, complex entities.
2. What is the maturity level of internal control over financial reporting?
3. Is the agency currently in a position to attest to the effectiveness of internal control over financial reporting and subject that conclusion to independent audit?
4. What are the benefits and costs of obtaining an opinion?
What underlies these questions is whether management has done its job of assessing its internal control and has a firm basis for its assertion statement before the auditor is tasked with performing work to support an opinion on internal control over financial reporting. As I have stressed throughout my testimony today, internal control is a fundamental responsibility of management, including ongoing oversight. The auditor’s role, similar to its opinion on the financial statements issued by management, would be to state whether the auditor agrees with management’s assertion that its internal control is adequate so that the reader has an independent view. As an example, consider DOD, which has many known material internal control weaknesses. Of the 25 areas on GAO’s high-risk list, 14 relate to DOD, including DOD financial management. Given that DOD management is clearly not in a position to state that the department has effective internal control over financial reporting, there would be no need for the auditor to do additional audit work to render an opinion that internal control was not effective. On the other hand, as I just mentioned, for fiscal year 2004 SSA management reported that it does not have any material internal control weaknesses over financial reporting. The auditor’s unqualified opinion on internal control over financial reporting at SSA provided an independent assessment of management’s assertion about internal control, which we believe by its nature adds value and credibility similar to the auditor’s opinion on the financial statements. As you know, Mr.
Chairman, recent legislation making the Department of Homeland Security (DHS) subject to the provisions of the CFO Act, which this Subcommittee spearheaded, requires DHS management to provide an assertion on the effectiveness of internal control over financial reporting for fiscal year 2005 and to obtain an auditor’s opinion on its internal control over financial reporting for fiscal year 2006. In addition, the CFO Council and PCIE are required by the DHS legislation to jointly study the potential costs and benefits of requiring CFO Act agencies to obtain audit opinions on their internal control over financial reporting, and GAO is to perform an analysis of the information provided in the report and provide any findings to the House Committee on Government Reform and the Senate Committee on Homeland Security and Governmental Affairs. We believe that the study and related analysis are important steps in resolving the issues associated with the current reporting on the adequacy of internal control. In addition, this issue is being discussed by the Principals of the Joint Financial Management Improvement Program—the Comptroller General, the Director of OMB, the Secretary of the Treasury, and the Director of the Office of Personnel Management. In closing, as the Congress and the American public have increased demands for accountability, the federal government must respond by having a high standard of accountability for its programs and activities. Areas vulnerable to fraud, waste, abuse, and mismanagement must be continually evaluated to ensure that scarce resources reach their intended beneficiaries; are used properly; and are not diverted for inappropriate, illegal, inefficient, or ineffective purposes. 
I want to emphasize our commitment to continuing our work with the Congress, the administration, the federal agencies, and the audit community to continually improve the quality of internal control governmentwide, and to help ensure that action is taken to address the internal control vulnerabilities that exist today. To that end, as I said earlier, the leadership of this Subcommittee will continue to be an important catalyst for change, and I again thank you for the opportunity to participate in this hearing. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For information about this statement, please contact Jeffrey C. Steinhoff at (202) 512-2600 or McCoy Williams, Director, Financial Management and Assurance, at (202) 512-6906 or at williamsm1@gao.gov. Individuals who made key contributions to this testimony include Mary Arnold Mohiyuddin, Abe Dymond, and Paul Caban. Numerous other individuals made contributions to the GAO reports cited in this testimony.

Internal control is at the heart of accountability for our nation's resources and how effectively government uses them. This testimony outlines the importance of internal control, summarizes the Congress's long-standing interest in internal control and the related statutory framework, discusses GAO's experiences and lessons learned from agency assessments since the early 1980s, and provides GAO's views on the Office of Management and Budget's (OMB) recent revisions to its Circular A-123.
GAO highlights six issues important to successful implementation of the revised Circular, specifically, the need for supplemental guidance and implementation tools; vigilance over the broader range of controls covering program objectives; strong support from managers throughout the agency and at all levels; risk-based assessments and an appropriate balance between the costs and benefits of controls; management testing of controls in operation to assess if they are designed adequately and operating effectively; and management accountability for control breakdowns. Finally, GAO discusses its views on the importance of auditor opinions on internal control over financial reporting. Internal control represents an organization's plans, methods, and procedures used to meet its missions, goals, and objectives and serves as the first line of defense in safeguarding assets and preventing and detecting errors, fraud, waste, abuse, and mismanagement. Internal control provides reasonable assurance that an organization's objectives are achieved through (1) effective and efficient operations, (2) reliable financial reporting, and (3) compliance with laws and regulations. The Congress has long recognized the importance of internal control, beginning with the Budget and Accounting Procedures Act of 1950, which placed primary responsibility for establishing and maintaining internal control squarely on the shoulders of management. In 1982, when faced with a number of highly publicized internal control breakdowns, the Congress passed the Federal Managers' Financial Integrity Act (FMFIA). FMFIA required agency heads to establish a continuous process for assessment and improvement of their agency's internal control and to annually report on the status of their efforts. In addition, the act required the Comptroller General to issue internal control standards and OMB to issue guidelines for agencies to follow in assessing their internal controls.
GAO monitored and reported on FMFIA implementation efforts across the government in a series of four reports from 1984 through 1989 as well as in numerous reports targeting specific agencies and programs. With each report, GAO noted the efforts under way, but also that more needed to be done. In 1989, GAO concluded that while internal control was improving, the efforts were clearly not producing the results intended. The assessment and reporting process itself appeared to have become the endgame, and many serious internal control and accounting systems weaknesses remain unresolved, as evidenced by GAO's high-risk report, which highlights serious long-standing internal control problems. In 1995, OMB made a major revision to its guidance that provided a framework for integrating internal control assessments with other work performed and relaxed the assessment and reporting requirements, giving the agencies discretion to determine the tools to use in arriving at their annual FMFIA assurance statements. OMB's recent 2004 revisions to the internal control guidance are intended to strengthen the requirements for conducting management's assessment of control over financial reporting. GAO supports OMB's recent changes to Circular A-123 and in particular the principles-based approach for establishing and reporting on internal control. GAO also noted six specific issues that are important to successful implementation of OMB's revised guidance and discusses its views on the importance of auditor opinions on internal control over financial reporting.
In 1975, a new federal law, now called IDEA, established a federal commitment to identify children with disabilities and provide special education and related services such as speech and language services, psychological services, physical and occupational therapy, and transportation. The cornerstone principle of IDEA is the right of children with disabilities to have a free appropriate public education. Under the law, school districts must provide special education and related services without charge to parents, and the services must meet the standards of the SEA. The services for and placement of each child must be based on the child’s unique needs, not on his/her disability. IDEA also stipulates that children with disabilities are to be educated in the “least restrictive environment”; that is, the law requires that children with disabilities be educated with children who are nondisabled to the maximum extent appropriate. About 13 percent of students in federally supported programs, or about 6.5 million children, receive special education services under IDEA. These students have a wide variety of needs that range from mild to severe. Children with speech or language impairments, specific learning disabilities, emotional disturbance, hearing impairments (including deafness), visual impairments (including blindness), orthopedic impairments, autism, traumatic brain injury, other health impairments, or mental retardation, and who need special education and related services are eligible under IDEA. School districts are responsible for identifying students who may have a disability and evaluating them in all areas related to the suspected disability. The evaluation process is intended to provide information needed to determine if the student is eligible as defined under IDEA. The IEP team decides on, among other things, special education and related services that will be provided for the child and on the frequency, location, and duration of the services to be provided.
The law requires two steps for an IEP: (1) a meeting of the IEP team to agree on an educational program for a child with a disability and (2) a written record of the decisions reached at the meeting. Development of the IEP is designed to facilitate communication between parents and school personnel and provide an opportunity for resolving any differences concerning the special education needs of a child. The IEP also documents a commitment of resources for providing special education and related services, and schools are responsible for ensuring that the child’s IEP is carried out as it was written. Disagreements over eligibility determinations about a child and over an IEP can be contentious and occasionally result in disputes. Many disagreements between families and local schools are resolved informally, during initial or follow-up IEP meetings at the local schools, or in other venues such as conferences with principals or other administrators. On occasion, however, parties have been unable to resolve their differences. In these instances, under IDEA, procedural safeguards afford parents recourse when they disagree with school district decisions about their children. Disagreements can be formally resolved through state complaint procedures, through a due process hearing, or through mediation. A state complaint is initiated through a signed written complaint that includes a statement that a public agency has violated a requirement of IDEA and the facts on which the statement is based. If the complaint is against a school district, the SEA typically informs the school district of the complaint by formal notification, requests documentation from the local education officials, and, when necessary, conducts an on-site investigation. The SEA must issue a written decision to the complainant that addresses each allegation in the complaint and contains findings of fact and conclusions, and the reason for the SEA’s final decision.
If violations are found, the decision specifies the corrective actions to achieve compliance. A due process hearing is an administrative agency process initiated by a written request by one of the aggrieved parties to either the SEA or the LEA, depending on the state’s process. An impartial hearing officer listens to witnesses, examines evidence, and issues a written decision. In the decision, the hearing officer determines whether violations occurred and issues remedies. Mediation is a voluntary process whereby parents and school districts agree to meet with an impartial third party in an informal setting to reach a resolution that is mutually agreeable. Agreements are mutually designed, agreed to, and implemented by the parties. While the processes used for resolving disputes vary, other important key differences exist among these three mechanisms as well. Table 1 identifies some of the key differences in formal dispute resolution mechanisms offered by SEAs. Parents and other parties can generally choose which mechanism to use to resolve their dispute. Parents have the right to request a due process hearing at any time over any issue related to the identification, evaluation, or educational placement or the provision of a free appropriate public education to the student. They may also file both due process requests and written state complaints simultaneously, but the SEA must set aside any part of the state complaint that is addressed in the due process hearing until the conclusion of the hearing. Any issue in the complaint not addressed in the due process hearing must be resolved within the time frames and procedures consistent with the state complaint requirements. Finally, although either parents or school districts can file a request for a due process hearing, this mechanism is potentially very costly to both parties, in terms of financial expenses and relationships. 
While school districts and parents are responsible for their own attorney’s fees and other associated expenses, the hearing officer is paid for by the state. According to the National Association of State Directors of Special Education (NASDSE), due process systems are structured similarly across the states with one major distinction—about two-thirds of the states use a one-tier system in which the hearing is held only at the state level. About one-third of the states use a two-tier system in which a hearing occurs at a local level, usually the school or district, with the right to appeal to a state-level hearing officer or panel. Even though the 1997 amendments to IDEA required states to make mediation available as a voluntary alternative to parents or school districts when they request a due process hearing, most states had mediation systems in place much earlier. In September 1994, NASDSE reported that Connecticut and Massachusetts were the first states (in 1975) to implement formal mediation systems. By 1985, 15 more states had implemented mediation, and over the next decade 22 additional states had mediation systems. Education’s Office of Special Education Programs (OSEP) is responsible for overall administration and allocation of federal funds for states’ implementation of IDEA programs. In addition, OSEP is charged with assessing the impact and effectiveness of state and local efforts to provide a free appropriate public education to children and youth with disabilities. OSEP has contracted for two major research studies that focus, in part, on dispute resolution activities. One of these, conducted by Abt Associates, the Study of State and Local Implementation and Impact of the Individuals with Disabilities Education Act (SLIIDEA), will include nationwide data over a 5-year period (2000-04); a report on selected findings was published in January 2003. The second study, SEEP, is being conducted by the American Institutes for Research.
In May 2003, this project reported on procedural safeguards and related expenditures for dispute resolution from survey data of a nationwide sample of LEAs. Since states are not required to collect or report data on dispute resolution activity, these studies, along with two studies by NASDSE, provide the most recently available information on the prevalence of formal dispute activity. However, each of these studies has limitations, which are discussed in appendix I. Under IDEA, Education also provides funds to grantees for parent centers. The parent training and information centers and community parent resource centers provide a variety of services, including helping families obtain appropriate education and services for their children with disabilities, training and information for parents and professionals, connecting children with disabilities to community resources that can address their needs, and resolving problems between families and schools or other agencies. Each state has at least 1 parent center and, currently, there are 105 parent centers in the United States. According to the Technical Assistance Alliance for Parent Centers, the national coordinating office, parent centers provided assistance to nearly 1 million parents and professionals during the 2001-02 school year. Formal disputes between schools and families in the 4 states we visited ranged from identifying a student’s disabilities to developing and implementing the IEP and the student’s placement. Officials in these states told us that disputes frequently arose between families and school districts over (1) identifications, that is, whether children were eligible for IDEA services and how their eligibility determinations were made; (2) the types of special education and related services, if any, they needed; (3) whether schools carried out the education programs as written; and (4) whether schools could provide an appropriate educational environment for certain students. 
SEA and LEA officials told us that schools and parents occasionally disagreed about whether or not a child needed special education services. On the one hand, a school may want to evaluate a child because it believes he or she may have a disability and, in this case, the school must evaluate the child at no cost to the family. A parent may also ask for the child to be evaluated, but if the school does not think the child has a disability it may refuse to evaluate him or her. Parents who disagree must take appropriate steps to challenge the school’s decision. Conversely, for a variety of reasons, parents may not want the child to receive special education services. For example, the family may disagree with the school’s decision about whether or not the child has a disability, or the parent may be concerned about the possibility of negative perceptions about special education identification. Officials in five of the eight school districts we visited mentioned that disputes occurred because parents wanted or did not want their children identified for special education. Another issue that existed in some school districts was over the availability of related services, such as speech and language services and occupational therapy. Disputes sometimes occurred as a result of problems in providing related services, including the types, amounts, methods, or the failure to provide services. Speech and language services, for example, were mentioned as a recurring problem in some areas because of the shortage of specialists available to provide these services. Officials in six of the eight school districts we visited identified having disagreements with parents regarding the provision of speech and language services. SEA and LEA officials also identified a number of issues related to the IEP that caused disagreements between parents and school districts. 
For example, we were told that disagreements had occurred because parents believed the school had not implemented the IEP as agreed upon. Moreover, parents and schools also disagreed about whether the school had chosen the appropriate instructional methods for a child. For example, parents may want a child with autism to receive an intensive behavioral interventions program that consists of one-on-one instruction with a trained therapist. Because this type of instruction could be very costly to the LEA, school officials told us they would like the flexibility to consider a less expensive but suitable alternative approach as part of the student’s IEP. Officials in five of the eight school districts we visited mentioned that disputes with parents resulted from the instructional methods chosen or preferred by the school, particularly for students with autism. We also found that school officials and parents sometimes disagreed about whether a placement was the appropriate and least restrictive environment for a child. For instance, some educators have contended that a child should attend classes primarily for students with disabilities, while parents believed their children would perform better in a regular classroom. Some disputes about placement also resulted from the parents’ desire to have their children taught outside the public school system. Because serving a child outside the school district can be very expensive, school districts preferred, whenever feasible, to keep a child within the district. Officials in six local school districts we visited mentioned that disputes had occurred over decisions about a child’s educational setting. While national data on disputes are limited and inexact, the reported available information indicates that formal dispute resolution activity, as measured by the number of due process hearings, state complaints, and mediations, was generally low. 
According to a 2002 NASDSE report, the nationwide number of due process hearings held—the most expensive of the three dispute resolution mechanisms—was generally low for a 5-year period that ended in 2000, with most hearings occurring in a few locations. Finally, based on national data from three studies, the rates of mediations and state complaints were also low, but somewhat higher than the rates of due process hearings. While the total number of due process hearings held nationally was low over the 5-year period from 1996-2000, most hearings were concentrated in a few locations. In April 2002, NASDSE reported that, over the 5-year period, requests for hearings steadily increased from 7,532 to 11,068. Because requests for due process hearings are frequently withdrawn or the parties resolve their issues through other means, most requests do not lead to formal hearings. NASDSE reported that the number of due process hearings held was low and had decreased from 3,555 to 3,020. We calculated that due process hearings occurred at a low rate of about 5 per 10,000 students with disabilities in 2000. (See fig. 1 for the number of hearings requested and held nationwide from 1996 through 2000.) However, while the number of due process hearings held nationwide decreased over the 5-year period, much of the decline occurred in New York, which experienced a substantial reduction in hearings held: over the 5-year period, the number of due process hearings held in New York declined from 1,600 to 1,052. In addition, according to the NASDSE study, most due process hearings were held in a few locations. Nearly 80 percent of all hearings were held in 5 states—California, Maryland, New Jersey, New York, and Pennsylvania—and the District of Columbia. The rates of due process hearings per 10,000 students in these states ranged from 3 in California to 24 in New York; in the District of Columbia the rate was 336 due process hearings per 10,000 students. 
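The per-10,000 rates cited in this section can be reproduced with simple arithmetic. The following is an illustrative sketch only, not part of GAO's methodology; it assumes the roughly 6 million students with disabilities that this report uses as the approximate nationwide enrollment for these school years:

```python
# Illustrative only: reproduces the approximate per-10,000 rates reported
# in this section. The 6 million enrollment figure is the report's rounded
# nationwide count of students with disabilities.
def rate_per_10000(events: int, students: int = 6_000_000) -> float:
    """Dispute resolution events per 10,000 students with disabilities."""
    return events / students * 10_000

print(round(rate_per_10000(3_020)))  # due process hearings held (2000) -> about 5
print(round(rate_per_10000(4_266)))  # mediation cases (1998-99, SEEP) -> about 7
print(round(rate_per_10000(6_094)))  # state complaints (2000, NASDSE) -> about 10
```

These computed values match the rates of about 5, 7, and 10 per 10,000 students reported from the NASDSE and SEEP data.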
See figure 2 for the total numbers of due process hearings held in the 5 states and the District of Columbia compared with the rest of the nation over the 5-year period. Using data from its nationwide sample survey of the 1998-99 school year, SEEP reported that the prevalence of dispute activity among school districts varied by certain demographic characteristics. For example, the percentage of urban school districts that reported having at least 1 due process case—request or hearing—for the year was significantly higher than that of either suburban or rural districts (an estimated 50 percent, 20 percent, and 9 percent, respectively). Similarly, large school districts reported significantly more due process cases than smaller districts. However, when the study made adjustments for the number of students served by examining the rate of due process cases per 10,000 special education students, no statistically significant differences were found in rates for either urbanicity or size. SEEP also analyzed due process data by district income levels and found a significant difference—an estimated 52 percent of the highest income school districts reported at least 1 due process case, 13 times the percentage of lowest income districts (4 percent). According to limited national data available from three studies, the rates of mediations and complaints per 10,000 students with disabilities were generally low, but somewhat higher than the rates of due process hearings. SLIIDEA reported that in the 1999-2000 school year more formal disputes between parents and schools were resolved through mediation than due process hearings. Based on survey results from all 50 states and the District of Columbia, this study reported that the median number of mediations for states was 4 for every 10,000 students with disabilities. The study also reported that 87 percent of the school districts surveyed said they did not have any mediation cases in the 1999-2000 school year. 
Two other studies also reported low numbers nationally of mediation cases and complaints. In May 2003, the SEEP study reported that 4,266 mediation cases were held during the 1998-99 school year, from which we calculated a rate of about 7 per 10,000 students. In February 2003, a NASDSE study reported that 6,094 complaints were filed nationwide during the 2000 school year or 2000 calendar year. Similar results were found in the SEEP study, which reported that 6,360 state complaints were filed during the 1998-99 school year. Given that roughly 6 million students with disabilities were served in these school years, we calculated that about 10 complaints were filed for every 10,000 students with disabilities. SEEP’s survey also revealed that an estimated 62 percent of districts reported having no cases involving complaints, due process hearings requested or held, or mediations during the school year. (See app. II for information on the levels of formal dispute resolution activity in the urban and rural school districts we visited.) In the 4 states in our review, and in Iowa, where we examined alternative dispute resolution strategies, officials told us they emphasized mediation in resolving disputes, and some locations had developed additional strategies for early resolution of disagreements between families and school districts. Officials saw mediation as a major resource for achieving agreements, strengthening relationships, resolving disputes more quickly, and reducing cost. The states we visited had implemented formal mediation by 1990 and, in varying degrees, exceeded minimum federal requirements by not tying it to a request for a due process hearing. In addition, 3 states we visited had established additional early dispute resolution strategies that were less formal and less adversarial. All 4 of the states we visited encouraged mediation as the mechanism for resolving disputes between schools and parents. 
All 4 states reported that parents and school districts could request mediation at any time for any issue related to the identification, IEP development and implementation, placement, or the free appropriate public education of a student, but the degree to which it was specifically offered and used varied. In 2 of the states we visited, California and Massachusetts, mediation was used more frequently in dispute resolution in fiscal year 2002 than complaints and due process hearings combined. Mediation was used less often than state complaint procedures in Ohio and Texas, but both states had taken steps to expand their mediation programs. Table 2 provides the numbers of mediation cases over a 3-year period compared with complaints and due process hearings in the 4 states we visited. While mediation was used less often in Ohio and Texas, SEA officials in both states expected the numbers of mediations to increase with recent changes in their mediation systems. In Ohio, a state education official told us, and advocates confirmed, that concerns about the objectivity of the mediation process in that state had made parents reluctant to use the state mediation system. As of June 2003, the Ohio SEA had contracted with four mediators and was in the process of adding four more across the state, and the state also expected to provide ongoing evaluation of the mediation process. In Texas, state officials told us they expected an increased reliance on mediation because the SEA had expanded the use of voluntary mediation as a means to resolve disputes quickly by offering it to parties involved in state complaints, although states are not required to offer mediation in conjunction with state complaints. Officials in all 4 states we visited said mediation offered benefits to all parties. Three of the 4 states reported that a high percentage of mediations resulted in agreements. 
The University of the Pacific reported that 93 percent of mediations in California resulted in agreements between families and schools during the 2001-02 fiscal year. Similarly, Massachusetts and Ohio reported success rates of 85 percent and 89 percent, respectively, for the same time period. Further, state education officials told us that mediation helped to foster communications between schools and parents and strengthen relationships. They also told us that mediations generally resolved disputes more quickly than state complaints or due process hearings. According to SEA officials in Ohio, for example, most mediations occurred within 2 weeks of the request. Texas state education officials also reported that mediations typically took place within 30 days of receipt of the complaint. On the other hand, an administrator and some advocates told us that mediation agreements were not always implemented or enforced; however, no data were available on the extent to which this occurred. Additionally, 3 of the states reported that mediations were less costly than due process hearings. The Texas SEA estimated that over the past decade it had saved about $50 million in attorney fees and related due process hearing expenses by using mediation rather than due process hearings. The state also reported that it spent an average of $1,000 for a mediator’s services compared with $9,000 for a hearing officer’s services. Similarly, the University of the Pacific reported in January 2003 that in California, the average cost to the state for mediation was $1,800, while the average cost of a due process hearing was $18,600. These data are consistent with SEEP’s recent nationwide findings that of 4,312 districts reporting on cost-effectiveness, 96.3 percent of the respondents perceived mediation to be more cost-effective than due process hearings. All 4 of the states we visited had created additional opportunities for offering mediation as a means to resolve disputes. 
In Texas and Ohio, affected parties in a state complaint were immediately offered mediation to resolve their dispute. In Massachusetts, it was offered when parents and educators disagreed over a student’s proposed IEP and failed to reach consensus. These cases were automatically referred to the Bureau of Special Education Appeals for resolution, where mediation and due process hearings were offered. In fiscal year 2002, Massachusetts state officials estimated that approximately 10 percent of these IEP-related disputes resulted in mediation; most of the remaining cases were resolved less formally. In California, parties could request “mediation only” without filing a request for a due process hearing. In this option, California state law specifically excludes attorneys—for parents or school districts—from participating. Although state and local education officials and advocates viewed the option as a viable and less adversarial alternative for dispute resolution, it was used only 208 times in California in fiscal year 2002, compared with 1,774 mediations tied to due process hearings. States and localities we visited also used a variety of additional dispute resolution strategies that showed potential to help resolve disputes early, but limited data were available to assess their effectiveness. Iowa developed and promoted several strategies as part of a continuum of options for resolving disputes between parents and schools. One of these options, the Parent-Educator Connection, was created to resolve differences between parents and schools at the earliest point. This effort was designed to provide each of the 15 area education agencies with staff who were trained in conflict resolution. These parent-educator coordinators attended meetings, including IEP meetings, at either parent or educator request. According to an SEA official, parent-educator coordinators attended 896 meetings during the 2001-02 fiscal year. 
Another option focused on increasing the availability of individuals with mediation skills to resolve more serious conflicts between parents and schools. These individuals, called resolution facilitators, were often regional education staff who were trained to assist families and schools in resolving their differences by discussing the problems and helping the parties work toward an acceptable agreement before the disagreement resulted in a more formalized dispute involving the SEA. According to state officials, another goal of the program was to teach others, including administrators, educators, and parents, about mediation, negotiation, and conflict resolution. In 2001, 238 participants, including 65 parents, received training, but the SEA had not collected data about how often resolution facilitators were used or about the results of the informal mediation processes that had occurred. Iowa also promoted the availability of a somewhat more formal mediation called a pre-appeal conference that was not tied to a request for a due process hearing. According to state officials, the rationale for establishing the pre-appeal conference was to allow the parties another opportunity to resolve their dispute early, before it became acrimonious and a formal request for a due process hearing was filed. Officials told us that the pre-appeal conference was conducted in much the same manner as a mediation held in connection with a due process hearing. In 2002, the pre-appeal conference was used five times more often by families and educators than mediation and usually resulted in an agreement. Iowa advocated and actively promoted the availability of the pre-appeal conference and resolution facilitators to educators and parents. In Ohio, the state funded a pilot parent mentor program whereby parents of students with disabilities were hired to help school districts and other families by providing training, support, and information services. 
One of their most important duties was to attend IEP meetings and other meetings at parent or school staff request. While no data were available on the cost-effectiveness of this program in resolving disputes at the local level, the state increased funding for the program and expanded it from 10 pilot sites in 1990 to 70 project sites supporting 96 parent mentors, serving approximately one-third of Ohio’s school districts. During the 2001-02 school year, parent mentors attended 2,685 IEP meetings and had contact with 12,538 families of the 239,000 students with disabilities in Ohio. California had an alternative dispute resolution grant program that provided limited funding in 2001 to 18 of the 119 regional education agencies within the state to establish strategies to prevent or address disagreements. Each region, typically consisting of more than one school district, selected several strategies and developed its own program for dispute resolution. One of these strategies, called facilitated IEPs, was used by 12 regions and involved one school district borrowing an expert trained in mediation from another school district to facilitate the IEP meeting. To become facilitators, staff participated in 4-day training programs that emphasized facilitation skills within the IEP process. The training was intended to provide facilitators with the tools to conduct IEP meetings in a way that enabled the team to (1) focus the IEP content and process on students’ needs, (2) use a collaborative process, (3) build and improve relationships, and (4) reach consensus. An overall goal of this alternative grant program was to reduce the numbers of due process hearings requested in certain areas of the state. While there were no impact data for this program or any of the other strategies, 12 of the regions that participated showed an overall decrease of 42 percent in requests for due process hearings from 2001 to 2002. 
In general, officials in the school districts we visited told us they had few problems responding to state complaint notifications. The problems they encountered had little impact on the timeliness of the complaint process, and state and local education officials appeared to be working together to overcome them. According to the local school district officials we interviewed, complaint notifications generally provided sufficient information to allow them to respond within the states’ required time frames. Both state and local officials told us the amount of time local school districts were given to respond to the notification letter ranged from 3 to 10 days. To allow districts to respond to complaints, the notification letter typically (1) identifies the student, (2) identifies the student’s school, (3) describes the nature of the complaint, and (4) specifies the relevant documents needed for the state to resolve the complaint and conduct an independent on-site investigation, if determined necessary. Los Angeles Unified School District officials said they experienced a few problems with notifications because, on occasion, the state did not include the supporting documentation for the complaint, such as a copy of the relevant IEP or evaluation, along with the notification letter. Also, these school district officials told us that the notification sometimes did not include the name of the school or the child’s date of birth, which initially made it difficult to identify the student. While these problems may have resulted in several days’ delay, Los Angeles Unified School District officials said that some of these administrative issues would be resolved once the district had implemented its Web-based IEP system, which it expected to complete in January 2004. According to an SEA official, the state was generally flexible and allowed the school district additional time to provide the requested documents. 
In addition, officials of the Austin (Texas) Independent School District and Hamilton (Ohio) Local School District told us the state notification included a summary of the parents’ allegations. However, they were sometimes unable to discern the nature of the parents’ complaint from this summary. To better understand the nature of the complaint, Austin school district officials formally requested a copy of the parent’s signed letter from the SEA. According to SEA officials, Texas had recently begun to include a copy of the parents’ letter as part of the notification. Overall, the numbers of formal disputes between parents and school districts were generally low compared with the 6.5 million students aged 3 through 21 served during the 2001-02 school year, but the thousands of disputes that do occur strain relationships and can result in great expense. The concentration of due process hearings in a few localities suggests that many factors, including local attitudes about conflict, may be at play when parents or others dispute a school district’s decisions. The states we visited viewed mediation as a valuable tool for parents and schools to resolve many disputes before they become acrimonious. The fact that the states we visited emphasized mediation and made it more widely available than IDEA requires—along with other options for early dispute resolution—may hold promise for reducing contentious and expensive forms of dispute resolution, such as due process hearings. We provided a copy of this report to Education for its review and comment. Agency comments are reprinted in appendix III. Education agreed with our findings and stated that the report would be of great interest and highly relevant to the present congressional consideration of IDEA. 
Education said that it will assess the administration of dispute resolution procedures in the six high-incidence jurisdictions identified in our report through a combination of monitoring and technical assistance. Education also provided technical comments, which we incorporated as appropriate. Copies of this report are being sent to the Secretary of Education, appropriate congressional committees, and interested parties. Copies will be made available to others upon request. The report is also available on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix IV. In conducting our review, we obtained and analyzed information from the Department of Education, state education agencies (SEA), local school districts, and the McGeorge School of Law at the University of the Pacific. We visited 5 states, and in 4 of these states we interviewed staff from SEAs and local education agencies (LEA), including one urban and one rural school district in each state—for a total of eight school districts. At the district level, we performed our fieldwork at the Los Angeles Unified and Salinas Union High School Districts in California, the Boston Public and Southbridge School Districts in Massachusetts, the Cleveland Municipal and Hamilton Local School Districts in Ohio, and the Austin Independent and Goliad Independent School Districts in Texas. In selecting the states for our fieldwork, we considered states that (1) varied in volume of formal dispute resolution activity, (2) used one- or two-tier due process hearing systems, (3) had developed alternative dispute resolution strategies, (4) were visited by the Department of Education’s Office of Special Education Programs over the past few years, (5) included large urban school districts, and (6) were geographically diverse. 
We met with SEA officials in Iowa because the state was identified by experts in the field for having innovative strategies in alternative dispute resolution. In addition, we met with representatives of other professional organizations, including the National Association of State Directors of Special Education and the Consortium for Appropriate Dispute Resolution in Special Education. We also interviewed members of parent resource and advocacy groups (federally funded and other nonprofits) in each of the states visited; these organizations employed parents of children with disabilities, whose views we obtained. To identify what kinds of issues resulted in formal disputes between parents and school districts, we interviewed state and local education officials and parent resource and advocacy groups and obtained data during our site visits to California, Massachusetts, Ohio, and Texas. To determine how much formal dispute resolution activity occurred, we collected and reported data from each of the 4 states and reviewed and reported the results of four nationwide surveys, each affected by different data and research limitations: Dispute Resolution Procedures, Data Collection, and Caseloads Study. This study of state dispute resolution activities was conducted by the National Association of State Directors of Special Education (NASDSE) between February and April 1999. All 50 SEAs responded and provided some information on their state dispute resolution systems, including data on complaints, mediations, and due process hearings for the 1999-2000 school year or 2000 calendar year. The study authors cautioned that the state data are variable and are gathered and recorded using different approaches. Also, to make the data more usable, missing data were replaced with data from the previous year or were calculated and derived from the 30 states with complete information. Due Process Hearings: 2001 Update. 
Annually, Project FORUM at NASDSE surveyed special education directors to obtain nationwide data on the numbers of due process hearings requested and held over a 10-year period (1991 through 2000). The most recent survey also obtained information on the due process hearing systems used in all states and the District of Columbia to determine whether they were one- or two-tier hearing systems. The study authors noted that because states vary in the way data are collected and maintained, the data were reported as a comparison of annual incidence even though the specific year of collection does not cover the same months. Further, short-term changes in the reporting of these data might be due to factors other than a change in a state’s policy, but multiyear changes in national totals could indicate trends in the due process system. Study of State and Local Implementation and Impact of the Individuals with Disabilities Education Act (SLIIDEA). This study, funded by the Office of Special Education Programs (OSEP), is collecting data over a 5-year period by means of mailed surveys at the state, district, and school levels, and through case studies of the implementation of the Individuals with Disabilities Education Act (IDEA) in selected school districts on selected topics. In January 2003, one report was issued from this study, Final Report on Selected Findings, which included data on due process hearings and mediations. This report provides data on dispute resolution activity obtained from state and district surveys that were administered during the 1999-2000 school year. The estimates of dispute activity in this study were based on a survey of school districts with a response rate of 31 percent. Abt Associates conducted a nonresponse survey with eight of the original survey items, primarily by telephone, to determine potential bias between survey respondents and nonrespondents. 
Because some differences were found, data from the nonresponse survey were used to adjust the estimates based on the originally interviewed districts. This analysis assumed that nonresponding districts were similar to responding districts in the way they would answer the survey items. What Are We Spending on Procedural Safeguards in Special Education, 1999-2000? This series of reports is based on descriptive information derived from the Special Education Expenditure Project (SEEP), a national study funded by OSEP and conducted by the American Institutes for Research. This report provides estimates of school district expenditures on special education mediation, due process, and litigation activities for the 1999-2000 school year; the prevalence of dispute activity (state complaints, mediations, due process hearings, and litigations) for the 1998-99 school year; demographic characteristics of school districts with and without dispute activity; and other related information. Data were collected from a sample of school districts to generalize to all districts in the 50 states and the District of Columbia. However, no overall response rate was cited in this study; therefore, the level of nonresponse and its effect on data quality are unknown for this survey. Because of the survey design, the SEEP data on due process hearings did not distinguish between due process hearings requested and due process hearings held. Also, the SEEP data may include dispute resolution activity beyond the procedural safeguards under IDEA, such as activity provided for by Section 504 of the Rehabilitation Act of 1973, as well as other activities made available by states. Inconsistencies in the way states define and collect data on dispute resolution activities could affect the validity of the estimates, as well as make between-state comparisons difficult. 
No national reporting system exists to identify and quantify the various causes of special education disputes or the prevalence of dispute resolution activity among the states. States have developed their own database systems, which use a wide variety of categories and definitions of disputes with many different allowable entries. For example, in cases that involved the simultaneous filing of a state complaint and a request for mediation, some SEAs recorded only the procedure that was used to resolve the dispute, while other states recorded both the filed complaint and the request for mediation. As a result, the data reported in national studies, as well as the data from our site visits, are of varying quality, resulting in inexact counts of dispute resolution activity. To determine what mechanisms (formal and informal) were used to resolve disagreements, we interviewed state education officials and local school district administrators and obtained and reviewed documents that described these mechanisms. To determine whether LEAs had problems responding to dispute notifications from states, we interviewed special education administrators in each of the eight school districts we visited and reviewed SEA procedures and related documents. We conducted our work between November 2002 and September 2003 in accordance with generally accepted government auditing standards. The school districts we visited varied in their use of the dispute resolution mechanisms and generally reflected the national trends in that complaints and mediation were used more often than due process hearings to resolve disputes between families and schools. Table 3 summarizes the levels of formal dispute resolution activity over a 3-year period in the eight school districts we visited in California, Massachusetts, Ohio, and Texas. The following people also made important contributions to this report: Ellen Soltow, Susan Bernstein, Luann Moy, Kris Braaten, Roger Thomas, and Richard Burkard. 
| In the 2001-02 school year, about 6.5 million children aged 3 through 21 received special education services under the Individuals with Disabilities Education Act (IDEA). On occasion, parents and schools disagree about what kinds of special services, if any, are needed for children and how they should be provided. Conflicts between school officials and families sometimes become costly, both financially and in terms of the harm done to relationships. As requested, GAO determined the kinds of issues that result in formal disputes, the extent to which the three formal mechanisms (due process hearings, mediations, and state complaints) are employed for resolution, the role of mediation and other alternative dispute resolution strategies in selected locations, and whether local education agencies received adequate and timely complaint notifications from states. To address these objectives, GAO reviewed available national data and conducted site visits to state and local education agencies in four states--California, Massachusetts, Ohio, and Texas. Officials in four states told GAO that disagreements usually arose between parents and school districts over fundamental issues of identifying students' need for special education, developing and implementing their individualized education programs, and determining the appropriate education setting. While national data on disputes are limited and inexact, the available information showed that formal dispute resolution activity, as measured by the number of due process hearings, state complaints, and mediations, was generally low. According to the National Association of State Directors of Special Education, while requests for hearings increased from 7,532 to 11,068 over a 5-year period, the number of due process hearings held decreased from 3,555 to 3,020; much of the 5-year decline occurred in New York. 
Additionally, most due process hearings were concentrated in five states--California, Maryland, New Jersey, New York, and Pennsylvania--and the District of Columbia. Overall, dispute resolution activity was generally low relative to the number of students with disabilities. About 5 due process hearings were held per 10,000 students with disabilities. National studies also reported no more than an estimated 7 mediations per 10,000 students and about 10 state complaints per 10,000 students. States GAO visited emphasized mediation in resolving disputes and made it more available than federal law required. Some locations had developed additional strategies for early resolution of disagreements between parents and school districts. Finally, school district officials in the four states said they had few problems with state complaint notifications, and problems encountered had little impact on the timeliness of the complaint process: state and local education officials appeared to be working together to overcome them. |
Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. A positive control environment is the foundation for all other standards. It provides discipline and structure as well as the climate which influences the quality of internal control. GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) We found overall control environment weaknesses at the four case study locations we audited as well as indications of similar weaknesses in our Air Force-wide analysis that contributed to breakdowns in key control activities and potentially fraudulent, improper, and abusive purchase card transactions. For example, we encountered numerous instances where supporting documentation was not available because installations destroyed purchase card records on a rolling 1-year basis due to faulty records retention guidance in the Air Force purchase card Instruction that did not comply with federal guidelines. We also found weaknesses in the areas of (1) the number of cardholders and accounts, (2) approving official span of control, (3) credit limits compared to historical spending, (4) documentation of cardholder and approving official training, (5) implementation of audit and internal review recommendations, and (6) accountability and disciplinary action. Further, given the magnitude of the purchase card program at the installations we audited, we found the human capital infrastructure for program monitoring and oversight to be inadequate. As discussed in a later section of this report, these control environment weaknesses have contributed to fraudulent, improper, and abusive purchase card activity. 
We also found some positive aspects of the Air Force purchase card program, such as Air Force-wide purchase card operating procedures and aggressive Air Force Audit Agency reviews of installation purchase card programs since 1996. The importance of the role of management in establishing a positive internal control environment cannot be overstated. GAO’s Standards for Internal Control in the Federal Government discusses management’s key role in demonstrating and maintaining an organization’s integrity and ethical values, especially in setting and maintaining the organization’s ethical tone, providing guidance for proper behavior, and removing temptations for unethical behavior. During our audit of Air Force fiscal year 2001 purchase card activity, we encountered numerous instances where supporting documentation was not available because installations destroyed purchase card records on a rolling 1-year basis due to faulty records retention guidance in the Air Force purchase card Instruction. Federal records retention requirements in the Federal Acquisition Regulation, the General Records Schedule established by the National Archives and Records Administration (NARA), and DOD’s Financial Management Regulation require records supporting program and contracting activity to be retained for 3 years and records supporting financial transactions to be retained for 6 years and 3 months. However, the records retention guidance in Air Force Instruction 64-117 called for documentation received and generated by the cardholder, such as vendor invoices, sales receipts, and shipping reports, and cardholder logs to be maintained for only 1 year after the final payment. In addition, 36 C.F.R. 1228.30, which covers disposition of federal records, requires agencies to establish a schedule for disposing of their records and obtain approval of their records disposition schedules from the NARA.
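The gap between the federal retention periods and the 1-year Air Force guidance can be made concrete with a short calculation. This is purely an illustrative sketch: the final-payment date is hypothetical, and `add_months` is a helper written for this example, not part of any Air Force or NARA system.

```python
from datetime import date
import calendar

# Retention periods cited above:
#   - Air Force Instruction 64-117 guidance: 1 year after final payment
#   - program/contracting records (FAR, General Records Schedule): 3 years
#   - financial transaction records (DOD FMR): 6 years and 3 months

def add_months(d: date, months: int) -> date:
    """Return the date `months` calendar months after d (day clamped)."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

final_payment = date(2001, 3, 15)  # hypothetical final payment date

af_disposal = add_months(final_payment, 12)         # 1 year
contract_disposal = add_months(final_payment, 36)   # 3 years
financial_disposal = add_months(final_payment, 75)  # 6 years, 3 months

print(af_disposal)         # 2002-03-15
print(contract_disposal)   # 2004-03-15
print(financial_disposal)  # 2007-06-15
```

Records destroyed on the Air Force's rolling 1-year schedule would thus be gone years before the federal retention periods for the same documents expired.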
We saw no evidence that the Air Force developed a records disposition schedule for purchase card records. While missing records affected all aspects of purchase card controls, the greatest impact was on cardholder and approving official appointments and training and cardholder delegations of purchasing authority. Further, we found that cardholders at Wilford Hall Medical Center, located at Lackland Air Force Base (AFB), did not adhere to the Air Force requirement to retain purchase card transaction records for even the 1-year period. As discussed in the next section of this report on tests of key controls, 52 of the 152 transactions in our Lackland AFB sample were for Wilford Hall purchase card activity, and required supporting documentation was not available for 23 of these 52 transactions. The Air Force Instruction also called for documentation generated by the installation program coordinator and approving officials, such as records of training, delegations of authority, and surveillances, to be retained only as long as the cardholder and approving official are performing that function. Air Force headquarters officials told us that the Air Force Instruction, which is currently being revised, would include corrections to the records retention guidelines. While the number of Air Force purchase cardholders peaked at about 80,000 cardholder accounts in September 2001, the overall number of cardholders from October 2000 through September 2002 has remained about the same. As of September 2002, the Air Force reported that it had about 77,000 purchase card accounts—translating to about 1 purchase card for every 7 employees. In contrast, the Navy had reduced the number of its purchase cardholders from about 52,000 to about 23,000 and only about 1 of every 31 employees was a purchase cardholder.
We determined that the Air Force did not have specific policies governing the number of cards to be issued or criteria for identifying employees eligible for the privilege of cardholder status. Purchase cards were given out on the basis of a request from an individual employee’s unit commander. The request was then forwarded to the installation’s purchase card agency program coordinator, who approved the request and began the process for obtaining a new card from U.S. Bank. In addition to an excessive number of cardholders, we found that some Air Force cardholders had as many as 10 government purchase cards. The Air Force permits cardholders to have numerous purchase card accounts to facilitate accounting for purchase card expenditures related to different funding sources—appropriated and nonappropriated funds—as well as different accounting lines within the same fund source. Assigning multiple purchase cards is not an appropriate method of accounting for purchase card transactions. Further, this practice places substantial credit risk in the hands of one individual. In response to concerns about approving official span of control raised during our initial Navy purchase card work, the Director of DOD’s Purchase Card Joint Program Management Office issued a memorandum on July 5, 2001, that called for no more than five to seven cardholders per approving official. During fiscal year 2002, the Air Force established goals for reducing the number of cardholder accounts assigned to approving officials. Our analysis of span of control ratios at the four installations we audited disclosed that, on average, the Air Force adhered to the DOD span of control guidelines. However, as shown in table 1, these averages have masked the wide range of ratios across each Air Force installation—some of which exceeded the DOD guidelines.
As of August 2002, we found that the four installations we audited had from 22 to 32 approving officials who were responsible for more than 7 cardholder accounts, including four approving officials at two installations—Travis AFB and Edwards AFB—who were responsible for more than 20 cardholder accounts. As shown in table 1, the percentage of approving officials whose span of control exceeded DOD guidelines ranged from 18 percent to 32 percent at the four Air Force locations we audited. Our analysis of purchase card spending compared with credit limits showed that monthly credit limits for the four case study locations far exceeded their actual monthly spending. Limiting credit available to cardholders is a key factor in managing the purchase card program and in minimizing the government’s financial exposure. On August 13, 2001, DOD’s Director of Defense Procurement sent a memorandum to the directors of all defense agencies stating that supervisors should set reasonable limits based on what each person needs to buy as part of his or her job and that every cardholder does not need to have the maximum transaction or monthly credit limit. Air Force officials told us that their credit limits appear excessive because these limits include credit limits of primary and alternate, or backup, approving officials and cardholders. The officials explained that they assign primary and alternate approving officials and cardholders to local units to ensure that purchases can continue when primary approving officials or cardholders are on leave, assigned to temporary duty locations, or deployed. The officials also told us that alternate approving officials and cardholders are given the same credit limits as the primary approving officials and cardholders. The Director of DOD’s Purchase Card Joint Program Management Office told us that DOD guidelines suggest that credit limits for inactive accounts, including alternate accounts, should be reduced to $1.
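The two risk checks described above (span of control against the DOD guideline of no more than 7 cardholder accounts per approving official, and total credit limits against historical spending) can be sketched as follows. The account data and the 10x exposure threshold are invented for illustration; only the 7-account guideline comes from the DOD memorandum cited in the text.

```python
# Hypothetical sketch of two purchase card risk checks: approving
# official span of control against the DOD guideline, and financial
# exposure (credit limits vs. historical monthly spending).
# All account data below are invented for illustration.

DOD_MAX_SPAN = 7  # cardholder accounts per approving official (DOD memo, July 2001)

# approving official -> list of (cardholder, monthly credit limit, avg monthly spend)
accounts = {
    "AO-1": [("CH-%d" % i, 25_000, 1_200) for i in range(1, 23)],  # 22 cardholders
    "AO-2": [("CH-A", 10_000, 8_000), ("CH-B", 10_000, 6_500)],
}

for ao, holders in accounts.items():
    span = len(holders)
    limit = sum(h[1] for h in holders)
    spend = sum(h[2] for h in holders)
    flags = []
    if span > DOD_MAX_SPAN:
        flags.append("span of control %d exceeds DOD guideline of %d" % (span, DOD_MAX_SPAN))
    if spend and limit / spend > 10:  # illustrative threshold for excess exposure
        flags.append("credit limit is %.0fx historical spending" % (limit / spend))
    print(ao, flags or ["ok"])
```

A review along these lines would surface both kinds of outliers that the installation averages in table 1 masked.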
However, Air Force officials told us that, because of frequent schedule changes and the administrative burden associated with turning accounts on and off, they do not deactivate alternate accounts or reduce credit limits during periods when alternate purchasing authority is not needed. As shown in table 2, total financial exposure as measured in terms of purchase card credit limits substantially exceeded historical purchase card spending. Air Force officials told us that to better control cardholder spending limits, they worked with U.S. Bank to develop an automated control that would tie quarterly authorizations of budget authority to cardholder credit limits. According to Air Force officials, this control was implemented during fiscal year 2002.

Effective management of an organization’s workforce—its human capital—is essential to achieving results and an important part of internal control. Training should be aimed at developing employee skill levels to meet changing organizational needs.

GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

Documentation of cardholder and approving official training and authority varied widely across the four locations we audited. For example, we found that two of our four case study locations had maintained documentation on cardholder and approving official appointments, cardholder delegations of purchasing authority, and cardholder and approving official training. However, at Edwards AFB and Travis AFB, our test results indicated that 12 percent and 51 percent, respectively, of their fiscal year 2001 purchase card transactions were made by cardholders and/or approving officials who had no documentation showing they had received initial training. Further, none of the four case study locations we audited had adequate documentation to show that both cardholder and approving official training was current for the transactions in our sample.
Air Force Instruction 64-117 requires commanders or chiefs to prepare a Letter of Appointment designating the proposed cardholder and approving official and identifying their name, rank, duty title, telephone number, and e-mail address; the types of purchases to be made; and funds to be used to pay for the purchase card purchases. The installation program coordinator is to coordinate single and monthly purchase limits with the designated billing official and forward documentation to the bank to set up the purchase card account. While the Instruction provides for denial of appointments, it does not provide criteria or qualifications for who may be a cardholder or approving official. The Instruction also requires that prior to establishing a purchase card account and issuing a purchase card, all prospective cardholders and approving officials must receive training on requirements in the Federal Acquisition Regulation and Air Force purchase card and acquisition policies and procedures. Once initial training is received, the Instruction requires all cardholders to receive supplemental training in the form of annual refresher training and it implies that approving officials should have refresher training. The Air Force Instruction requires installation program coordinators to maintain documentation of approving official and cardholder purchase card appointments and training and cardholder delegations of purchasing authority as long as these individuals are serving in those capacities. Contracting officials at Travis AFB explained that in accordance with Air Force Instruction 64-117, they did not retain records of appointments, training, and delegations of authority for approving officials and purchase cardholders who were no longer performing those duties or were no longer at Travis AFB. 
We also found that Travis AFB did not maintain central files to document cardholder and approving official appointments and cardholder delegations of purchasing authority, and documentation of training was incomplete or could not be located by individual units at that installation. In addition, Edwards AFB contracting officials told us that they did not retain cardholder and approving official appointment letters. Table 3 shows the results of our statistical tests for documentation of appointments, delegations of purchasing authority, and training for cardholders and approving officials associated with the transactions in our sample. Although Edwards AFB and Travis AFB officials asserted that their cardholders and approving officials had received required purchase card training, we found numerous improper purchase card transactions involving failure to follow federal guidelines on micropurchases and mandated sources of supply, indicating that cardholders and approving officials either had not been trained, that training was inadequate, or that cardholders were not following guidelines addressed in the training classes. For example, we found that cardholders at Edwards AFB were unaware that they are required to obtain competitive price quotes from three different vendors when their purchases exceed the $2,500 micropurchase threshold. We also found that Travis AFB cardholders frequently were not following these guidelines. Further, we found that the Edwards AFB installation program coordinator routinely waived micropurchase and cardholder transaction limits to permit cardholders to purchase goods and services at higher amounts. We also found that Edwards AFB cardholders frequently did not follow federal guidelines on mandated sources of supply and Travis AFB cardholders did not notify property book officers when they used the purchase card to buy computer equipment and other pilferable property.
In addition, we were unable to determine whether cardholder and approving official training was current due to missing documentation and the use of e-mail notices, bulletins, and newsletters to update purchase card training at three case study locations. Travis AFB lacked documented evidence that most of its cardholders and approving officials had received annual refresher training, and the other three case study locations could not document informal training that was provided outside of classrooms. During fiscal year 2002, due to concerns about whether approving officials and cardholders had received adequate refresher training, Edwards AFB and Nellis AFB began using classroom training and sign-in sheets to document annual purchase card refresher training for both approving officials and cardholders.

Monitoring of internal control should include policies and procedures for ensuring that the findings of audits and other reviews are promptly resolved.

GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

We found that the four case study locations conducted annual purchase card internal reviews, called surveillances, and the Air Force Audit Agency performed aggressive reviews; however, we also identified significant repeat findings indicating a lack of commitment to adhere to purchase card regulations and DOD and Air Force policies and procedures. In addition, at one of the four case study locations we audited—Edwards AFB—internal reviewers did not disclose all identified problems in the reports on surveillance results. The Air Force Audit Agency reported similar findings in its reviews of fiscal year 2000 installation-level purchase card activity. The Air Force auditors concluded that without enforcement of purchase card requirements, such as administratively disciplining the offenders or revoking purchase card privileges, purchase card discrepancies will likely continue.
The Air Force Audit Agency reported that inaction by contracting officials reduced the effectiveness of other purchase controls, increased transaction errors, and undermined the Air Force purchase card goals. As stated in GAO’s Standards for Internal Control in the Federal Government, monitoring of internal control should include policies and procedures for ensuring that the findings of audits and other reviews are promptly resolved. Managers are to (1) promptly evaluate findings from audits and other reviews, including those showing deficiencies, and recommendations reported by auditors and others who evaluate agency operations, (2) determine proper actions in response to findings and recommendations from audits and reviews, and (3) complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management’s attention. The Air Force purchase card Instruction, which was issued in December 2000, requires the installation purchase card program coordinator to perform a surveillance (internal review) of each approving official’s billing account and 25 percent of the cardholders’ accounts at least every 12 months. The Air Force Instruction includes an optional surveillance checklist that covers key purchase card requirements, including most of the control activities covered in our audit. Our review of the surveillances associated with the approving officials and cardholders responsible for the transactions in our case study samples showed that many of the problems identified by installation internal reviewers were consistent with the problems we found in our audit. For example, these surveillance results identified failure to adhere to Air Force guidance on $2,500 micropurchase limits, purchases from required sources, failure of billing officials to review their cardholders’ accounts, prohibited purchases, and failure to maintain purchase logs. 
Further, we determined that Edwards AFB intentionally did not address problems related to noncompliance with micropurchase requirements and mandated sources of supply nor did it recommend corrective actions in its reports on surveillance results. For example, 40 of the 44 Edwards AFB surveillance reports that we reviewed stated “no discrepancies were noted,” even though documentation supporting this conclusion for 11 surveillance reports indicated that problems, such as splitting purchases to avoid obtaining competitive prices when purchases exceeded the $2,500 micropurchase threshold and failure to use mandated vendors, were identified. Further, while use of the surveillance checklist is optional, we found that several of the checklists provided as documented support for the surveillances that we reviewed were blank. As a result, we were unable to determine whether a review had been performed, but was not documented, or if no review had been performed as a basis for the conclusions in the associated surveillance letters. When we discussed our concerns with the Director of Contracting at Edwards AFB, the Director told us that internal reviewers do not address splitting purchases to avoid micropurchase requirements and failure to use mandated sources of supply in their surveillance reports because they consider these requirements to be “procedural matters.” However, the guidelines on micropurchases and mandated sources of supply are prescribed in law and the Federal Acquisition Regulation and are not merely procedural. It is important that surveillance reports identify noncompliance with requirements in law and regulations, which are designed to prevent improper purchases. Air Force headquarters officials agreed with our position. In addition, we found that installation program coordinators at two of our four case study locations had issued their surveillance reports to approving officials rather than unit commanders. 
In contrast, Nellis AFB surveillance letters were addressed to unit commanders and were signed by the Contracting Director. Further, the Nellis AFB Contracting Director wrote personal notes on the face of the surveillance letters commending unit commanders for good surveillance results and noting areas that must be improved when the surveillances had identified failure to follow purchase card guidelines. This positive “tone at the top” served to hold unit commanders accountable for effective implementation of their purchase card programs. In its August 2002 report on the results of its audit of fiscal year 2000 purchase card transactions at 46 Air Force installations, Air Force Audit Agency auditors concluded that overall, Air Force guidance established adequate purchase card controls and oversight procedures. At the same time, Air Force auditors found that installation purchase card program coordinators and approving officials did not adhere to this guidance in executing their surveillance responsibilities. For example, consistent with our audit, Air Force auditors found continuing problems with (1) advance approval for purchases of computer equipment, (2) splitting purchases into multiple transactions to circumvent micropurchase and cardholder single transaction limits, (3) accountability for pilferable property items purchased with a purchase card, and (4) purchase card statement reconciliations and approving official review. Further, according to Air Force auditors, of the nearly $150 million in transactions evaluated, cardholders acquired supplies and services totaling approximately $25.4 million using purchase methods specifically disallowed under established policy and guidance. Specifically, Air Force auditors reported the following findings related to their audits of purchase card activity during fiscal year 2000 at Edwards, Lackland, and Travis Air Force bases and purchase activity during fiscal year 2001 at Nellis AFB. 
At Edwards AFB, Air Force auditors had repeat findings of split purchases and lack of required advance purchase authorizations for purchases of computer equipment. The auditors also found that cardholders were not familiar with micropurchase requirements. At Lackland AFB, Air Force auditors reported that the issues of improper reconciliation procedures to validate purchases and charges and ineffective surveillance of cardholders and billing officials were repeat conditions reported in a prior audit report. Air Force auditors reported a repeat finding that Travis AFB did not always maintain property accountability for items costing over $500. In addition, the auditors reported that cardholders did not always obtain required advance authorizations for purchases of computer and communication equipment. Air Force Audit Agency auditors reported that the 99th Wing at Nellis AFB effectively controlled purchase card purchases, noting that purchases were properly approved, recorded in cardholder logs, and that approving officials reviewed cardholder records monthly. However, the auditors found that some units improperly purchased bottled water and that opportunity existed for unit commanders to ensure greater visibility over unit purchases by reviewing automated records in U.S. Bank’s database. Management plays a key role in demonstrating and maintaining an organization’s integrity and ethical values, especially in setting and maintaining the organization’s ethical tone, providing guidance for proper behavior, removing temptations for unethical behavior, and providing discipline when appropriate. 
GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

During our work, we noted that misuse of the purchase card was not always subject to strong disciplinary action or consequences, even though Air Force Instruction 64-117 requires installation purchase card program coordinators to take appropriate action to document violations and preclude their recurrence. The Instruction also requires approving officials to document cardholder violations and forward the information to the program coordinator. The Instruction states that action(s) taken should be commensurate with the violation(s) and gives examples of actions, such as suspending the cardholder or approving official account, requiring remedial training, requiring restitution for any unauthorized purchases, and permanently revoking purchase card privileges. Holding individuals responsible for proper program execution is an integral part of a strong control environment. Our review of the results of annual purchase card surveillances performed at all four case study locations determined that the surveillances found violations of Federal Acquisition Regulation guidelines on micropurchases and requirements to use mandated sources of supply. Recommended disciplinary actions for violations of these federal regulations were limited to requiring the offending approving officials and cardholders to take remedial training. For repeat offenders, the surveillance reports generally recommended that the cardholder’s and/or approving official’s purchase card account(s) be suspended until remedial training was completed. In only a few cases had surveillance reports recommended canceling the accounts and revoking cardholder privileges, even for repeat offenders.
As previously discussed, Edwards AFB surveillances did not include findings on these issues and, therefore, Edwards AFB did not take disciplinary action for failure to follow federal guidelines on micropurchases and using mandated sources of supply. Further, although three Edwards AFB surveillance reports identified improper use of the purchase card to buy food for employees, pay for party supplies, and purchase invitations for a change of command ceremony, the reports did not recommend disciplinary action, and we saw no evidence that the cardholder or the benefiting individuals were required to pay for the unauthorized purchases. According to the Edwards AFB Contracting Director, cardholders have been counseled and in some cases referrals were made to unit commanders to take appropriate disciplinary action for improper use of the purchase card. However, Edwards AFB officials provided no documentation that any disciplinary actions were taken. The Air Force Audit Agency reported that inaction by contracting officials reduced the effectiveness of other purchase controls, increased transaction errors, and undermined the Air Force purchase card goals. Air Force auditors concluded that without enforcement of purchase card requirements, such as administratively disciplining the offenders or revoking purchase card privileges, purchase card discrepancies will likely continue. We agree with the Air Force Audit Agency’s conclusions. In response to our DOD purchase card audits, the Congress has recently addressed the question of discipline for those who misuse the government purchase card. 
Section 8149(c) of the Department of Defense Appropriations Act, 2003 (fiscal year 2003 appropriations act), and section 1007(a) of the Bob Stump National Defense Authorization Act for Fiscal Year 2003 (fiscal year 2003 authorization act) provide that the Secretary of Defense establish guidelines and procedures for disciplinary actions to be taken against department personnel for improper, fraudulent, or abusive use of government purchase cards.

Management should ensure that skill needs are continually assessed and that the organization is able to obtain a workforce that has the required skills that match those necessary to achieve organizational goals….As a part of its human capital planning, management should also consider how best to retain valuable employees, plan for their eventual succession, and ensure continuity of needed skills and abilities.

GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

Ineffective oversight of the purchase card program also contributed to weaknesses in the overall environment. Installation program coordinators are established as the pivotal officials in managing and overseeing the purchase card program. The coordinators at the installations we audited had little training on what they should be doing to oversee the program and limited time to carry out oversight activities that are called for in Air Force Instruction 64-117, such as reviewing U.S. Bank exception reports and following up on suspicious transaction activity as well as conducting annual surveillances. Effective oversight activities also would include other management reviews and evaluations to assess risks associated with the number of credit card accounts, credit limits, and approving official span of control and to assess the effectiveness of controls, identify systemic weaknesses, and determine the extent of potentially fraudulent, improper, and abusive or questionable purchases.
We also found that the program coordinators did not have the grade level or organizational authority—“clout”—to routinely deal with unit commanders, approving officials, and resource managers across the installation, who may significantly outrank them, to enforce purchase card guidelines and controls. Program coordinators have the primary responsibility for purchase card program management and significant control over procurement activities carried out by a large number of individuals. For example, during fiscal year 2001, the Nellis AFB program coordinator, who was a GS-9, similar in grade to a master sergeant, had responsibility for over 55,000 purchase card transactions totaling over $27 million involving 638 cardholder accounts and 99 approving officials. Installation contracting officials told us that program coordinator staffing had not kept pace with the growth in the purchase card program. GSA data show that Air Force purchase card activity has grown from 2.8 million transactions totaling $1.3 billion at the beginning of fiscal year 2000 to 3.2 million transactions totaling about $1.4 billion at the end of fiscal year 2001. During fiscal year 2002, Lackland AFB increased the number of staff assigned to the program coordinator’s office from three staff assigned in fiscal year 2001 to a total of six staff to provide better oversight of Wilford Hall’s purchase card program as well as to monitor purchase card use by units transferred to Lackland AFB when Kelly AFB closed. The other three installations we audited had not increased their program coordinator staffing. As of September 2002, Edwards AFB had three staff assigned, Nellis AFB had two staff, and Travis AFB had one and a half staff to oversee their purchase card programs.
Further, at Travis AFB, where military employees have held the program coordinator position, there have been four different individuals assigned to the program coordinator position since October 2000 due to high turnover associated with military positions. As a result, Travis AFB program coordinators spend much of their time learning, rather than overseeing, the purchase card process. Table 4 shows the grade level, staffing, and span of control data for installation purchase card program coordinators at our four case study locations as of August 2002. In September 2002, the Travis AFB Contracting Director told us that he had requested approval to hire a full-time GS-9 staff member to assist the program coordinator. In early November, the Deputy Contracting Director told us that they had received approval to hire a GS-11 civilian employee as the installation purchase card program coordinator as well as the GS-9 assistant, beginning in January 2003. Given the risks associated with ineffective purchase card program management identified in our purchase card work, it is imperative that agencies have sufficient numbers of qualified, experienced purchase card program coordinator staff to help oversee their purchase card programs and that their grade levels be commensurate with their responsibilities.

Internal control activities help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives.

GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

Our tests of statistical samples of transactions at four Air Force installations found weaknesses in key purchase card control activities. For example, of the five key control activities we tested, we found that all four Air Force locations had significant control breakdowns in at least three of them.
However, on a positive note, our tests of easily pilferable property items included in our statistical sample transactions showed that two of the four Air Force installations we audited were able to account for all the property items that we selected for testing. Control activities occur at all levels and functions of an agency. They include a wide range of diverse activities such as approvals, authorizations, verifications, reconciliations, performance reviews, and the production of records and documentation. For the Air Force purchase card program, we tested those control activities that we considered to be key in creating a system to provide reasonable assurance that transactions are correct and proper throughout the procurement process. The key control activities and techniques we tested include (1) advance approval of purchases, (2) independent receiving and acceptance of goods and services, (3) cardholder reconciliation of monthly statements, (4) independent review by an approving official of the cardholder’s reconciled statements and supporting documentation within the Air Force-prescribed time frame, and (5) cardholders obtaining receipts and maintaining invoices that support their purchases and provide the basis for reconciling cardholder statements. Table 5 summarizes the results of our statistical testing. Appendix I includes the specific criteria that we used to conclude on the effectiveness of these controls. The results in table 5 include transactions for which supporting documentation was destroyed due to improper records retention guidance in the Air Force Instruction. Our test work showed that, for the most part, the four case study locations we tested had documentation of required advance authorization with estimated failure rates for required advance authorization of purchases ranging from 2 percent to 12 percent. 
The segregation of duties between officials who authorize a purchase and the cardholder who makes the purchase helps reduce the risk of fraud, waste, and abuse in the purchase card program. The Air Force purchase card program Instruction 64-117 requires cardholders to obtain authorization from the specified controlling/servicing organization on base for certain purchases, including purchases of computer and communication equipment, video equipment, medical items, and hazardous materials, before making the purchase. The Air Force installations we audited documented advance authorizations on forms established specifically for that purpose. The requirement for documentation of independent receiving and acceptance by someone other than the cardholder is not specifically addressed in DOD policy or Air Force purchase card program Instruction 64-117. We believe that independent documentation of receipt of items purchased by a cardholder is a basic internal control activity that provides additional assurance to the government that purchased items are not acquired for personal use and that they come into the possession of the government. Based on our statistical testing, we estimated that the failure rate for independent documentation of receipt and acceptance—receiving of goods and services by someone other than the cardholder—ranged from 53 percent to 68 percent at the four Air Force locations we tested. The types of items in our sampled transactions that lacked independent evidence of receipt and acceptance included computer software and memory cards, a fax machine, a cassette recorder, camera film, hardware, supplies, and tools. These items were purchased at stores such as Homebase, Staples, and Radio Shack. 
Because the Air Force purchases items for valid, government purposes from stores that are widely used by consumers to acquire items for personal use, verification of receipt of goods and services by an individual other than the cardholder is necessary to reduce the risk of fraudulent transactions. Cardholder reconciliation is a key control activity for detecting invalid transactions, including billing errors and unauthorized purchases. However, based on our statistical testing, we estimated that cardholders at the four case study locations lacked documented evidence of timely reconciliations of monthly purchase card statements from 21 percent to 37 percent of the time. As evidence that purchase card statements were reconciled, we accepted check marks, notes, sequential numbering, and numbering systems that tied transactions on the statement to items on the cardholders’ purchase card logs. Independent approving official review of monthly, reconciled cardholder statements is a key control activity for segregation of duties, whereby no one individual has control over all aspects of a transaction. Based on our statistical testing, we estimated that approving officials at the four case study locations lacked documented evidence that they had reviewed monthly, reconciled purchase card statements, or had reviewed them within required time frames, from 69 percent to 87 percent of the time. Our statistical tests of approving official review considered only documented review of statements with evidence of reconciliation. The high failure rates are due, in part, to approving officials’ failure to date the reconciled monthly statements when they reviewed them. The failure rate also may be attributable to approving official duties falling into the category of “other duties as assigned” and the span of control issues discussed earlier. 
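The report does not prescribe any automated method for cardholder reconciliation. The following is only a minimal sketch, assuming reconciliation amounts to matching each statement charge to a cardholder purchase-log entry by vendor and amount and flagging unmatched charges for dispute (all names and data below are hypothetical):

```python
from collections import Counter

def reconcile(statement, purchase_log):
    """Match monthly statement lines to the cardholder's purchase log
    by (vendor, amount); any statement line without a matching log
    entry is flagged for review or dispute. Data shapes are illustrative."""
    log_entries = Counter((e["vendor"], e["amount"]) for e in purchase_log)
    flagged = []
    for line in statement:
        key = (line["vendor"], line["amount"])
        if log_entries[key] > 0:
            log_entries[key] -= 1          # consume one matching log entry
        else:
            flagged.append(line)           # no record of this charge
    return flagged

# Hypothetical data: one statement charge has no purchase-log entry.
log = [{"vendor": "Staples", "amount": 84.12}]
stmt = [{"vendor": "Staples", "amount": 84.12},
        {"vendor": "E-Z Pawn", "amount": 2443.00}]
print(reconcile(stmt, log))   # the unlogged E-Z Pawn charge is flagged
```

A real reconciliation would also compare dates and verify receipt; the point of the sketch is that an unmatched charge is exactly the kind of billing error or unauthorized purchase this control is meant to surface.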
The high failure rate for approving official review is of particular concern because the Air Force uses a “pay and confirm” policy, which is inconsistent with governmentwide and DOD guidelines on reconciliation and payment of purchase card bills. In a letter dated April 30, 2002, DOD informed us that its reengineering memorandums and other pronouncements are in compliance with 10 U.S.C. 2784, which requires the Secretary of Defense to issue regulations that require, among other things, reconciliation of purchase card statements to receipts before the statements are forwarded to the disbursing office. Both section 4535 of volume 1 of the Treasury Financial Manual and DOD’s Purchase Card Reengineering Implementation Memorandum #3 (change 1, June 30, 1998) require that purchase card statements be reconciled and forwarded for payment in a timely manner and allow “pay and confirm” only with respect to verification of government receipt of the purchased items or services. In contrast, Air Force purchase card policy permits cardholder statements to be reconciled and approved after payment has been made. While a conscientious postpayment reconciliation and approval process may provide reasonable control, the lack of documented evidence of postpayment reconciliation and approval and the undisputed, potentially fraudulent transactions identified in our work underscore concerns about noncompliance with the law. In July 2002, Air Force management asked its contractor, U.S. Bank, to “shut down” (suspend from use) over 4,000 unreconciled, unapproved cardholder accounts until the reconciliations were completed and the approving officials had reviewed them. Accounts that had not been reconciled as of the end of August 2002 were canceled. According to an Air Force headquarters official, these accounts would need to be manually reconciled because they are no longer active in U.S. Bank’s system. 
Under Air Force “pay and confirm” procedures, the installation Financial Services Office designates a certifying officer to verify availability of funding and certify the monthly installation purchase card invoices for payment prior to receipt of the confirmation of reconciled statements from the approving official. Monthly invoices are to be paid in full and are not to be adjusted for disputed items. Instead, cardholders and approving officials are to resolve any irregularities through a separate dispute process. The Air Force Instruction requires that approving officials review and approve reconciled cardholder statements and submit the confirmed statements to the Financial Services Office within 15 days of receipt of the monthly statement, but no later than the 15th day of the following month— commonly referred to as the 15-day rule. The Financial Services Office files the confirmed statements with a copy of the previously certified statement. The pay and confirm process has yielded benefits, such as increased rebate earnings, and has almost eliminated late payment interest. However, without effective controls over cardholder reconciliation and approving official review, the pay and confirm process increases the risk that fraudulent, improper, and wasteful purchase card expenditures could occur and go undetected. DOD and Air Force officials told us that they were concerned about the lack of compliance with requirements for purchase card statement reconciliation and approval. DOD and U.S. Bank officials told us that this control was implemented upon receipt of the April 25, 2002, monthly purchase card statements. The officials told us that on July 1, 2002, approximately 4,000 cardholder accounts were suspended and no further charges could be made to these accounts until they were reconciled, and reviewed and approved by the approving officials. 
The officials also told us that reconciliation of these accounts resulted in a high volume of disputed transactions as cardholders began to reconcile their statements and approving officials had to review and approve them, indicating that fraudulent or erroneous transactions may have occurred and had not been previously detected. On August 22, 2002, DOD’s Purchase Card Joint Program Management Office Director told us that 149 of the purchase card accounts that were suspended on July 1, 2002, had not been reviewed and confirmed by the approving officials and, as a result, the accounts had not been reactivated. The DOD Director said that he planned to cancel these accounts because they apparently are not needed. However, the failure to reconcile these accounts raises questions about whether cardholders and/or approving officials may have made fraudulent or improper transactions for which they want to avoid scrutiny. Without reconciliation and independent review, DOD and the Air Force have no assurance that such purchase card activity did not involve fraudulent or improper transactions. As shown in table 5, three of the Air Force case study locations we audited—Edwards, Nellis, and Travis Air Force bases—maintained the vast majority of the receipts for items purchased with the government purchase card. For example, our statistical test results showed the estimated failure rates for this control activity ranged from 0 to about 9 percent across these three locations. Our statistical test results for Lackland AFB showed that this location had an estimated 30 percent failure rate for this control activity. Of the 37 Lackland AFB transactions that were missing receipts, 23 related to Wilford Hall Medical Center transactions. Another three of the transactions in our Lackland AFB sample related to a security forces unit that transferred to another Air Force installation and did not retain purchase card documentation. 
In testing for evidence of a receipt, we accepted either the original or a copy of the invoice, sales slip, or other store receipt. GAO’s Internal Control Standards state, “all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. All documentation and records should be properly managed and maintained.” Without supporting sales receipts or invoices, it is not possible to tell the quantity and type of items purchased or whether those items were for government business or of a personal nature. In such cases, a thorough investigation would be needed to determine whether a transaction was proper, or if it represented a potentially fraudulent, improper, or abusive transaction needing corrective action. Further, without a receipt, two other key control activities—independent receipt and acceptance and approving official review—become ineffective. Independent receiving cannot confirm that the purchased items were received and the approving official cannot review a cardholder statement reconciled with the supporting receipt. A near zero failure rate is a reasonable goal considering that receipts are easily obtained or replaced when inadvertently lost. As shown in table 6, three of the four installations we audited recorded most of the items we selected for testing in their accountable property records. In addition, the Air Force was able to locate and we confirmed that all of the items that were not recorded at two installations were in the possession of the government. However, Air Force officials were unable to locate 4 of the 114 accountable property items we tested at Edwards AFB and 14 of the 70 accountable items we tested at Travis AFB, indicating that these items may have been lost or stolen. 
The property book officer at Travis AFB told us that cardholders do not always notify the property office and provide documentation of accountable items purchased with the government purchase card, even though many of them are easily pilferable and desirable items. As previously discussed, Travis AFB's failure to record accountable property purchased with a government purchase card in the installation's property records was also a repeat Air Force Audit Agency audit finding. Items such as a digital camera, a laser printer, and computers and monitors were not included in base property records. The Edwards AFB property items that could not be located included two computer servers and two monitors costing a total of $11,258 that were ordered for other installations. The Travis AFB property items that could not be located included a digital camera costing $812 that, according to investigative records, was previously reported stolen from an employee's office, and eight computers costing $14,128 that were allegedly sent to the Defense Reutilization Marketing Service (DRMS) as excess items during the year they were purchased. Because serial numbers for the computers had not been recorded, we could not confirm that the eight computers we selected for testing were items that were sent to DRMS. A Travis AFB contracting official told us that these new computers should not have been sent to DRMS as excess property. The official told us that unneeded computer equipment is required to be turned in to the information technology unit for assignment to other installation units. Other missing items included a laptop computer and four computer monitors costing under $500. GAO's internal control standards and DOD Instruction 5000.64, Defense Property Accountability, require that accountable property be recorded in property records as it is acquired. 
The DOD Instruction refers to accountable property as “controlled inventory items” and defines these items as those designated as having characteristics that require them to be identified, accounted for, secured, segregated, or handled in a special manner to ensure their safekeeping and integrity. The Instruction defines pilferable items as those that have a ready resale value or application to personal possession and that are, therefore, especially subject to theft. However, the DOD Instruction does not include a list of items that fall into these categories. Accountable property generally includes high-cost property items and easily pilferable or sensitive items, such as computers and related equipment, cameras, cell phones, and power tools. Air Force Instruction 33-112, Computer Systems Management, requires mandatory inclusion of computer items costing $500 or more in inventory records. However, the Air Force does not have a policy for recording other types of easily pilferable or sensitive items in its property records. According to an Air Force headquarters acquisition official, decisions on how to control items, such as computers and related equipment, cameras, cell phones, and power tools, are left to the discretion of the installation commanders. As a result, there is no assurance that Air Force installations are following DOD policy. At the four installations we audited, we found that most computers were recorded in either central or unit-level property systems. However, cardholders at the installations we audited were not always aware that items such as digital cameras, fax machines, or computer items costing less than $500 meet the definition of controlled inventory items and/or pilferable items and thus should be recorded in their property records. 
One factor that may explain the positive Air Force test results at three of the four installations we audited is that these installations made greater use of centralized purchasing and receiving for computer equipment. As a result, contracting and information technology units controlled purchasing and receiving for these items and ensured that the items were recorded in the property records when they were received. In contrast, at Travis AFB, nearly half of the property items tested were not recorded in property records and management could not locate many of these items. The use of central purchasing and receiving helps to mitigate control breakdowns in which cardholders do not take action to ensure that accountable items are recorded in property records. We identified numerous purchase card transactions at the four installations we audited and in our Air Force-wide data mining that were potentially fraudulent, improper, and abusive or questionable. Buying items with purchase cards without the requisite control environment and key control activities in place creates unnecessary risk of fraud and abusive and wasteful spending. Also, the lack of records previously discussed raises concerns about whether files could have been destroyed so that potentially fraudulent, improper, or abusive transactions were not documented and subjected to review. In addition, we saw a number of potentially fraudulent transactions at the case study locations we audited and in our Air Force- wide analysis that related to compromised accounts. We did not review all potentially fraudulent, improper, and abusive transactions identified in our work. As discussed in appendix I, our work was not designed to identify, and we cannot determine, the extent of potentially fraudulent, improper, and abusive or otherwise questionable transactions. 
We identified transactions that Air Force officials acknowledged to be fraudulent, as well as potentially fraudulent transactions for which no supporting documentation was available, at all four installations we audited, as well as in our Air Force-wide analysis. Some transactions identified as potentially fraudulent resulted from compromised accounts in which a purchase card or account number was stolen and used by someone other than the cardholder to make unauthorized purchases. We found five potentially fraudulent transactions involving two purchase card accounts that were not disputed with the bank. Further, none of these transactions were referred to Air Force investigators until we questioned them. We considered potentially fraudulent purchases to include those made by cardholders that were unauthorized and intended for personal use. We also considered transactions for which there was no supporting documentation to be potentially fraudulent because, in the absence of supporting documentation, it is not possible to determine whether these transactions represented valid government purchases, fraudulent transactions that went undetected, or fraudulent transactions for which documentation was intentionally destroyed to cover up the fraud. In these instances, cardholders and approving officials were unable to tell us the types of items purchased or the purpose of the transactions. However, we determined that they had not disputed any of these transactions. Potentially fraudulent transactions can also involve vendors charging purchase cards for items that cardholders did not buy. Although collusion can circumvent what otherwise might be effective internal control activities, a robust system of guidance, internal control activities, and oversight can create a control environment that provides reasonable assurance of preventing or quickly detecting fraud, including collusion. Air Force and U.S. Bank officials told us that in July 2001, U.S. 
Bank identified numerous fraudulent transactions due to compromised accounts. According to U.S. Bank officials, this was a widespread fraud involving many credit card banks that was apparently related to a fraud ring that used a computer to randomly generate credit card account numbers and/or counterfeit credit cards, which they then used. U.S. Bank officials told us that the compromised accounts initially were believed to be associated with transactions in a few states, including California and Georgia. However, the fraud was subsequently determined to be a nationwide problem. According to Air Force and U.S. Bank officials, numerous Air Force purchase card accounts were canceled due to this fraud. Given the risk associated with such fraud, it is extremely important that cardholders reconcile their monthly statements in order to detect and dispute potentially fraudulent transactions. Table 7 illustrates the types of potentially fraudulent transactions that we identified. Our Air Force-wide data mining and analysis of purchase card documentation identified potentially fraudulent purchase card transactions at Andrews and McConnell Air Force bases that were never disputed with the bank or credited to the government. Our analysis of documentation related to the potentially fraudulent transactions that were not disputed disclosed the following facts. The E-Z Pawn transaction was for a $2,443 down payment on a $10,000 sapphire ring. An October 4, 2001, agency program coordinator purchase card surveillance report indicated that the reviewer had questioned the lack of receipt for the E-Z Pawn transaction; however, the surveillance did not question the propriety of this transaction. No further action was taken, even though pawnshops are coded to a merchant category that is required to be blocked as a means of preventing fraudulent transactions from being processed. 
In response to our inquiry, the program coordinator told us that there is no evidence that either the cardholder or the approving official disputed the transaction. Also, we found no credit for this transaction in the Air Force purchase card database from U.S. Bank. According to the cardholder, at the time the potentially fraudulent transaction occurred, he did not have possession of the purchase card. The cardholder told us that his purchase card account was being closed due to outsourcing of his unit’s function, and he had turned over his purchase card to another individual. The cardholder stated that, as a result, he never received his monthly statement and thus did not perform a reconciliation or identify the potentially fraudulent transaction. We determined that the program coordinator took no further action to ensure that the potentially fraudulent transaction was disputed. The Air Force headquarters acquisition official who obtained the supporting documentation on these transactions for our review and analysis told us that she referred these transactions to the Air Force Office of Special Investigations and our investigators confirmed that Air Force investigators had opened a case to investigate these frauds. The four potentially fraudulent McConnell AFB transactions totaling $3,232 were made on July 31, 2001. These potentially fraudulent transactions included charges made at San Diego area stores, including $690 at a Ross Store, $873 at Old Navy, $689 at K-Mart, and $980 at Target. These charges were all made when the purchase card was in the possession of the approving official while the cardholder was assigned to Biloxi, Mississippi, for over 2 months for noncommissioned officer training. Although the approving official stated that his review of the cardholder’s August 2001 statement detected the potentially fraudulent transactions, he did not dispute these transactions. 
The program coordinator told us that disputing erroneous charges is covered repeatedly in cardholder and approving official training. The program coordinator also told us that she counseled the approving official extensively on cardholder and approving official responsibilities. The program coordinator told us that she informed the approving official that she had reduced the credit limit on the account to $1 to avoid further charges. She also said that she instructed the approving official that he should dispute these transactions in the absence of the cardholder's dispute. However, instead of disputing these transactions, the approving official waited for the cardholder to return from training in mid-September and asked the cardholder to seek guidance from the program coordinator. A circular discussion ensued about who would take action to dispute the potentially fraudulent transactions, with the approving official stating that the cardholder was to dispute these transactions and the cardholder stating that he thought the program coordinator would dispute the transactions. As a result, the transactions were never disputed and the program coordinator took no further action. The Air Force headquarters acquisition official who obtained the supporting documentation on these transactions for our review and analysis told us that she referred these transactions to the Air Force Office of Special Investigations. However, Air Force investigators told us they did not initiate an investigation because they believed that the transactions had been credited. Our review of the Air Force purchase card database and discussions with the installation program coordinator determined that the transactions had not been credited and the potential fraud is unresolved. Therefore, we have referred this matter to our investigators for further investigation. 
In contrast to the failure to dispute the potentially fraudulent transactions discussed above, the following are examples of potentially fraudulent transactions that Air Force cardholders detected and disputed. A $19,000 charge for a duplicator from BEF Corporation, an imaging technology vendor in Allentown, Pennsylvania, related to a compromised purchase card account. Our investigators confirmed that the Edwards AFB cardholder called the bank and disputed the transaction before the item was delivered. As a result, neither the government nor the bank incurred a loss related to this compromised account. Our investigators confirmed that the cardholder had referred the transaction to Air Force investigators. Three unauthorized Travis AFB purchases, including one transaction at Flowers Sent Today, Inc., and two transactions at Flowers Anytime, totaling $432, were made by an inmate at a local county jail. Travis AFB officials told us that they disputed these transactions because they were unauthorized and they believed that the account had been compromised. These potentially fraudulent transactions were subsequently investigated by U.S. Bank and the purchase card account was canceled. Five potentially fraudulent charges totaling about $2,400 were made to a Scott AFB, Illinois, cardholder's account on August 21 and August 22, 2001. The charges were made at vendors in the Miami area, including charges at Ocean Drive Fashions, Inc., for $426, Rendezvous on the Beach for $224, Donald Pliner Concept Store for $520, Sunglass Hut for $586, and Watch World for $645. Our review of Air Force records showed that the cardholder disputed these transactions and a U.S. Bank investigation of the potentially fraudulent transactions was initiated on November 9, 2001. Our review of U.S. Bank records showed that the purchase card account was credited for the fraudulent transactions on June 6, 2002, nearly a year after the transactions were made. 
A Patrick AFB cardholder identified unauthorized purchase card transactions totaling $249 that were made at Citgo 7-Eleven, Publix, Chevron, Kash N Karry, and Speed SM by his wife between April 1 and 6, 2001. According to the unit commander, the cardholder’s purchase card privileges were revoked, and the cardholder agreed to pay for his wife’s unauthorized charges. However, as of the end of October 2002, we determined that the cardholder had not reimbursed the government for his wife’s unauthorized use of the purchase card. We suggested that the employee should submit a check to the installation’s Financial Services Office along with an explanation for the payment. We also contacted the Air Force Office of Special Investigations to inquire about their purchase card fraud cases. Investigators told us that their investigative database did not contain codes that permit them to identify purchase card fraud cases. As a result, the investigators had to manually review all procurement-related cases to attempt to identify cases involving purchase card fraud. Appendix III summarizes some of the purchase card fraud cases that were investigated by the Air Force Office of Special Investigations. Besides potentially fraudulent activity, our work also identified numerous examples of transactions related to improper purchases, as well as improper use of the purchase card. Improper purchases are those purchases that, although approved by Air Force officials and justified as intended for government use, are not permitted by law or regulation or DOD or Air Force policy. Improper use of the purchase card related to use of the card as an acquisition tool without a negotiated contract, use of the card by a nongovernment activity, and improper use of convenience checks associated with purchase card accounts. We identified the following three types of improper purchases. Purchases that did not serve a legitimate government purpose. 
Split purchases in which the cardholder circumvents the micropurchase limit or other transaction limits. Purchases from improper sources. Various federal laws and regulations require procurement officials to acquire certain products from designated sources, such as Javits-Wagner-O’Day Act (JWOD) vendors. In addition, agencies are required to purchase furniture, if available, from Federal Prison Industries, Inc. (UNICOR), and DOD policy requires that printing services be obtained in-house through the Defense Automated Printing Service. We found several instances of purchases, such as clothing, luggage, and food, in which cardholders improperly used their purchase cards or purchased goods that were not authorized by law or regulation. The Federal Acquisition Regulation, 48 C.F.R. 13.301(a), provides that the governmentwide commercial purchase card may be used only for purchases that are otherwise authorized by law or regulation. We identified the improper Air Force transactions as part of our analysis of questionable vendor transactions at the four installations we audited and in our Air Force-wide data mining. Table 8 summarizes examples of the improper transactions we identified that do not serve a legitimate government purpose. The following examples illustrate the types of purchases included in table 8. Clothing and sunglasses. We identified numerous purchases of clothing for military personnel that appeared to be personal preference items. We determined that these purchases were improper based on our review of DOD directives, Air Force policies, and discussions with Air Force headquarters officials. For example, at Lackland AFB, one of our test locations, we identified purchases of clothing for drill instructors, including physical fitness clothing items from LL Bean costing $816 and 16 fleece jackets from Oakley costing $880. 
We also identified numerous purchases of military clothing items from REI totaling $20,698, including paratrooper jumpsuits and cold weather pilot jackets. These clothing items are covered under the military pay clothing allowance funded in the Military Personnel, Air Force, appropriation and should not have been purchased with Operation and Maintenance appropriations using the purchase card. In addition, we found purchases of what we consider to be personal clothing and accessory items that were authorized by installation officials. For example, Hanscom AFB, Massachusetts, authorized the purchase of a blue blazer from Filene’s costing $180 and eight dresses from Dress Barn costing $648 for civilian employees who participated in the Eubank Service Award Competition. Hickam AFB, in Hawaii, purchased two wool coats from Hecht’s in Waldorf, Maryland, costing $380 for a flight attendant assigned to the 65th Airlift Squadron. In addition, the Lackland AFB pararescue team purchased 12 pairs of sunglasses from Oakley costing $540 and improperly justified them as meeting the requirement for free-fall paratrooper goggles. Oakley sunglasses do not qualify as paratrooper goggles. None of these items are authorized by Air Force policy. Therefore, we concluded that all of these purchases involved personal items, which employees should pay for from their own salaries. Luggage and briefcases. We identified numerous purchases of luggage deemed necessary for employees who travel frequently, including 50 Samsonite suitcases costing $5,500 for members of the Thunderbirds team and garment bags costing $2,250 for the Bolling AFB band, and purchases of two Pathfinder Pullman suitcases costing $620 for recruiters. In addition, our review of a limited selection of Franklin Covey transactions identified six purchases of leather briefcases costing $212 each (after a 20 percent discount) and purchases of 203 flight bags (similar to a briefcase) costing from about $50 to $86 each. 
Luggage and briefcases are considered personal items that should be paid for from employees’ salaries. Food and water for employees. We identified purchases of meals, bottled water, and payment for a unit luncheon. Without statutory authority, appropriated funds may not be used to furnish meals to employees within their normal duty stations. We identified improper charges for food provided to employees during an internal government meeting and unit luncheons. For example, Lackland AFB scheduled a strategic planning meeting for local employees of the Tri-Care organization at a Marriott Hotel in San Antonio, Texas. The total cost of the meeting was $1,052, including $538 for breakfast and lunch for 18 individuals—3 consultants and 15 employees who were not on travel. In addition, appropriated funds may not be used to purchase bottled water for employees unless they are assigned to a duty station without potable water. Food and bottled water are considered personal items that employees should pay for from their own salaries. Another category of improper transactions is a split purchase, which occurs when a cardholder splits a transaction into more than one segment to circumvent the requirement to obtain competitive prices for purchases over the $2,500 micropurchase threshold or to avoid the other established credit limits. The Federal Acquisition Regulation and Air Force Instruction 64-117 prohibit these practices. Once items exceed the $2,500 threshold, they are to be purchased in accordance with simplified acquisition procedures, which are more stringent than those for micropurchases. Our analysis of purchases made at the four case study locations and our Air Force-wide data mining identified numerous split purchases. In addition, Air Force Audit Agency auditors identified split purchases as a continuing Air Force-wide problem. One split purchase we identified involved a violation of appropriations law. 
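The report states that Air Force-wide data mining identified numerous split purchases, but it does not describe the specific queries the auditors ran. A common screen for potential splits, sketched below purely as an assumption rather than as the auditors' actual method, groups each cardholder's same-day transactions with a single vendor and flags any group whose combined total exceeds the $2,500 micropurchase threshold. All function and variable names here are illustrative.

```python
from collections import defaultdict

# Hypothetical sketch of a split-purchase screen. The report does not
# publish its data-mining logic; this version groups each cardholder's
# same-day purchases from the same vendor and flags groups of two or
# more transactions whose combined amount exceeds the threshold.

MICROPURCHASE_THRESHOLD = 2500.00

def flag_potential_splits(transactions):
    """transactions: iterable of (cardholder, vendor, date, amount).
    Returns a dict mapping (cardholder, vendor, date) to the list of
    amounts for flagged groups."""
    groups = defaultdict(list)
    for cardholder, vendor, date, amount in transactions:
        groups[(cardholder, vendor, date)].append(amount)
    return {
        key: amounts
        for key, amounts in groups.items()
        if len(amounts) > 1 and sum(amounts) > MICROPURCHASE_THRESHOLD
    }

# Two $1,400 charges to one vendor on one day are flagged as a
# potential split; a single $1,400 charge is not.
sample = [
    ("cardholder-a", "vendor-x", "2001-09-30", 1400.00),
    ("cardholder-a", "vendor-x", "2001-09-30", 1400.00),
    ("cardholder-b", "vendor-y", "2001-09-30", 1400.00),
]
flagged = flag_potential_splits(sample)
```

Any group flagged this way is only a candidate: two legitimate, unrelated same-day purchases would also be caught, so each candidate still requires manual review of the supporting documentation before it is treated as an improper split.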
The purchase was made to avoid the expiration of unused fiscal year 2001 operation and maintenance appropriations. Our inquiries disclosed the following events surrounding this transaction. On September 30, 2001, at approximately 9:00 p.m. Eastern Standard Time, Air Combat Command headquarters at Langley AFB, Virginia, determined that $100,000 in fiscal year 2001 operation and maintenance appropriations would expire in 3 hours—at midnight—unless the funds could be obligated and spent quickly. At 6:00 p.m. Pacific Standard Time, the command contacted the USAF Weapons School, 57th Operations Support Squadron at Nellis AFB, Las Vegas, Nevada, with the direction to use the funds before they expired. At approximately 7:00 p.m., the Nellis AFB cardholder purchased 120 helmets at more than $800 each and several other items totaling just under $100,000 from the base supply store. The purchase was split into four separate transactions to stay within the cardholder’s single transaction limit of $25,000 per transaction. The items purchased were placed “on hold” and were not taken from the store. Over the next few days—October 1 and October 2—the cardholder “returned” the items, exchanging them for other items totaling the same amounts. In an explanatory memorandum, the cardholder wrote that the unit had not previously identified its unfilled requirements and, therefore, did not have a list of items for purchase if “end-of-year” money was available. The subsequent credits and reuse of the funds in early October 2001, in effect, converted fiscal year 2001 appropriations to fiscal year 2002 budget authority. Another type of improper purchase occurs when cardholders do not buy from a mandatory procurement source. Various federal laws and regulations, such as the Javits-Wagner-O’Day Act (JWOD), require government cardholders to acquire certain products from designated sources. The JWOD program is a mandatory source of supply for all federal entities. 
It generates jobs and training for Americans who are blind or have other severe disabilities by requiring federal agencies to purchase supplies and services furnished by nonprofit agencies, such as the National Industries for the Blind and the National Industries for the Severely Handicapped. Most JWOD program supplies are small-value items such as office supplies, cleaning products, or medical/surgical supplies that nearly always fall into the micropurchase category. We noted that most cardholders at the four installations we audited made purchases from required sources, and three of the four installations had JWOD stores on base. However, we found numerous Air Force-wide purchases from Franklin Covey for office supplies, such as calendars and day planners, which could have been purchased from JWOD vendors. We also found that Air Force cardholders charged $2,220 for 82 high-quality pens from Franklin Covey costing from $16 to $60 each. Table 9 summarizes transactions we identified that were made from other than required sources of supply. The failure to purchase designated items from JWOD vendors undermines public policy objectives to support programs for the handicapped. For example, as discussed in our Army purchase card report, the Director of Sales for the National Industries for the Blind told us that this program has experienced large decreases in sales over the past 2 years because cardholders were purchasing from commercial firms rather than buying the mandated products. Further, operating revenues of government service organizations, such as the Defense Automated Printing Service (DAPS), which is a required source of printing services for DOD agencies, and Federal Prison Industries, Inc. (UNICOR), which is a required source for furniture, are significantly reduced to the extent that cardholders do not use these sources for mandated products and services.
In addition to the improper transactions discussed above, our analysis of Air Force-wide purchase card transactions identified the following three types of improper use of the purchase card: use of the purchase card as an acquisition tool where a negotiated contract should have been used; purchase card use by a religious fund activity without authority in law, regulation, or DOD or Air Force policy; and improper use of convenience checks billed to the purchase card account. The following examples illustrate the types of purchases included in table 10. Month-to-Month Equipment Rental. We found that a cardholder at Whiteman AFB, Missouri, improperly used the purchase card as an acquisition vehicle without a negotiated contract. Specifically, the cardholder used the purchase card to rent tractors for use by the installation’s waste treatment facility at a cost of about $10,000 during fiscal year 2001. Our analysis of records related to this lease showed that an initial month-to-month rental of a tractor covered 3 years beginning in November 1997. In November 2000, the cardholder initiated monthly rental of a new tractor for 2 years. According to the installation program coordinator, the current cost to purchase the type of tractor that was rented during fiscal years 2000 and 2001 would be in the $35,000 to $45,000 range; however, a lease versus purchase cost-benefit analysis was never performed. We determined that the cost of the two rentals, which together covered a 5-year period, totaled over $50,000. Unauthorized Use of the Card by Chaplain’s Religious Fund. Our analysis of fiscal year 2001 FE Warren AFB, Wyoming, purchase card transactions totaling over $6,600 by a Chaplain Office volunteer, who later became a contractor, raised a number of questions about the proper use of the purchase card.
Air Force purchase card policies state that commercial purchase cards are provided to military members and federal civilian employees to pay for official government purchases, and that only employees may be cardholders. According to Chaplain Office officials, these transactions were for authorized purchases. They said they often use parishioners and volunteers to assist them in carrying out religious activities. However, under Air Force purchase card policies, only military members and federal civilian employees may be issued purchase cards. Chaplain Office officials told us that because the Chaplain Religious Fund did not fall under Air Force purchase card authority as either an appropriated fund or a nonappropriated fund activity, they believed they could set up the government purchase card program for their office by working independently with the bank. We discussed our concerns about Chaplain Office authority to use the government purchase card with Air Force attorneys and acquisition and Chaplain Office officials. After reviewing Air Force policy, Air Force attorneys advised us that the Chaplain’s Office did not have authority to use the government purchase card for Chaplain Religious Fund activities. Convenience Checks. We identified improper use of convenience checks related to payments in amounts over $2,500, payments for recurring services, and payments to vendors who accept purchase card payments. Air Force Instruction 64-117 limits the use of convenience checks to amounts of no more than $2,500 per check, prohibits the use of convenience checks for recurring services, and restricts convenience check use to instances where vendors do not accept purchase cards. Splitting amounts across more than one check to keep below the $2,500 limit also is prohibited. Another area of improper convenience check use involved reimbursement of employees for tuition assistance.
We determined that reimbursement to employees is not a permitted use of convenience checks. Further, because there is a 1.7 percent fee for using a convenience check, cost-benefit considerations are required when using convenience checks. Our analysis of fiscal year 2001 convenience check use determined that Air Force purchase cardholders who had convenience check authority had issued 45 convenience checks totaling over $200,000 for amounts over $2,500. We also found that a cardholder at Luke AFB, Arizona, improperly used convenience checks for recurring monthly payments on a 2-year automobile lease for authorized use by a military officer. It is common knowledge that car dealerships accept credit cards. At one of our case study locations—Travis AFB—we found that a cardholder had reimbursed employees for $12,214 in tuition expenses. The cardholder wrote two convenience checks—one check for $2,090 and another for $500—to reimburse an employee for a total of $2,590 in tuition expenses. The cardholder wrote two additional convenience checks—one check for $6,125 and another for $3,500—to reimburse two other employees for $9,614 in tuition expenses. The 1.7 percent fee on the first two checks was $44 and the fee on the second two checks was about $163—significantly more than the Defense Finance and Accounting Service fee of approximately $7 to process electronic payments. In its August 2002 purchase card report, the Air Force Audit Agency stated that its review found that cardholders issued convenience checks to pay salaries and wages totaling $512,378 for dieticians, nurses, and administrative personnel, and to acquire recurring services, such as aircraft washing totaling $84,830, local area network support costing $110,495, equipment rentals costing $20,700, and lawn-care services costing $43,505.
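The convenience check fee comparison above is simple arithmetic, and the short sketch below reproduces it using the figures given in the report: the 1.7 percent bank fee per check and the approximately $7 Defense Finance and Accounting Service cost per electronic payment. The function and variable names are illustrative, not drawn from any Air Force system.

```python
# Reproduces the convenience check fee arithmetic cited in the report:
# a 1.7 percent bank fee per check versus an approximately $7 cost
# per electronic payment processed by DFAS. Names are illustrative.

CHECK_FEE_RATE = 0.017   # 1.7 percent convenience check fee
ELECTRONIC_COST = 7.00   # approximate DFAS cost per electronic payment

def convenience_fee(amount):
    """Bank fee charged on a convenience check of the given amount."""
    return amount * CHECK_FEE_RATE

# The four Travis AFB tuition reimbursement checks cited above.
first_pair = [2090.00, 500.00]     # one employee, $2,590 total
second_pair = [6125.00, 3500.00]   # two other employees

fee_first = sum(convenience_fee(a) for a in first_pair)    # about $44
fee_second = sum(convenience_fee(a) for a in second_pair)  # about $163.6

# Four electronic payments would have cost about $28 in total,
# versus roughly $208 in convenience check fees.
savings_forgone = (fee_first + fee_second) - 4 * ELECTRONIC_COST
```

The point of the comparison is that the percentage fee scales with the check amount, so the larger the reimbursement, the more the cost-benefit analysis favors an electronic payment.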
Air Force auditors determined that cardholders expended $2.6 million for recurring services and incurred unnecessary bank fees of $15,228 associated with the 1.7 percent service fee. We also identified abusive and questionable transactions at installations we audited and in our Air Force-wide data mining. We defined abusive transactions as those that were authorized, but the items purchased were at an excessive cost (e.g., “gold plated”) or for a questionable government need, or both. Abuse occurs when the conduct of a government organization, program, activity, or function falls short of societal expectations of prudent behavior. Often, improper purchases, such as those discussed in the previous section, are also abusive. For example, the purchases of personal clothing and luggage for employees were also abusive purchases because they were for a questionable government need. Questionable transactions are those that appear to be improper or abusive but for which there is insufficient documentation to conclude either. We deemed questionable those purchases for which there was not a reasonable and/or documented justification. Questionable purchases often do not easily fit within generic governmentwide guidelines on purchases that are acceptable for the purchase card program. They tend to raise questions about their reasonableness. Many, such as gym-quality exercise equipment for fitness centers, are common Air Force—and DOD—purchases because the Air Force must provide more than merely a work environment for its personnel. However, others discussed in this section, involving excessive purchases of alcohol, payment for taxidermy services, and purchases of expensive leather computer cases, raise questions about whether they are appropriate purchases. Precisely because these types of purchases tend to raise questions and subject the Air Force to criticism, they require a higher level of advance purchase review and documentation than other purchases.
When we examined these types of purchases, we usually did not find evidence of advance purchase justification. In attempting to justify whether purchases were acceptable, improper, or abusive, program coordinators, approving officials, and cardholders often provided after-the-fact rationales for the purchases. We believe that these types of questionable purchases require scrutiny before the purchase, not after. The examples in table 11 illustrate our point. The following examples illustrate the problems associated with some of the transactions we identified as abusive or questionable. At Nellis AFB, the purchase card was used to pay for dinner and a show for 18 people costing $2,141 at Treasure Island—a Las Vegas hotel and casino. The purpose of the event was to entertain the General of U.S. Joint Forces Command, who was visiting the base. Air Force Instruction 65-603, Official Representation Funds – Guidance and Procedures, permits the use of government funds for official entertainment. However, we determined that the nature of this entertainment did not meet certain Air Force guidelines requiring that entertainment be conducted on a modest basis in the interest of the taxpayer. For example, we determined that the cost of the dinner party, which totaled $2,141, included about $800 for alcohol for the 18 people who attended the event—over $40 per person. We believe the excessive cost of alcohol purchased at this event falls short of societal expectations of prudent behavior and modest cost. An Air Force Academy Natural Resources office in Colorado Springs, Colorado, used the purchase card to pay Timberline Taxidermy $375 to prepare a shoulder mount of a mule deer head. According to the approving official, the deer was “road kill” that he found on the roadside and brought to the Natural Resources Office. The approving official then approved the purchase of the taxidermy service to prepare a stuffed shoulder mount of the deer.
The deer head was hung on the wall in the Natural Resources Office. The justification for the purchase of taxidermy services provided to our auditors stated, "The mule deer (Odocoileus hemionus) is the most common large mammal present on US Air Force Academy grounds. The mount was created as an educational/interpretive tool and is on display in the USAFA Natural Resources office. The mount can be removed from the office wall for use in educational presentations to the base population (e.g., that only males have antlers which are shed and re-grown each year.) The deer died after being struck by a vehicle and was salvaged by USAFA Natural Resources personnel." When our auditor asked the cardholder how often the deer head was removed from the office wall and used for educational purposes, the cardholder stated, "not much." The cardholder, the approving official, and two other Natural Resource employees occupy the office where the deer head currently hangs. The 412th Test Wing/Electronic Warfare at Edwards AFB used fiscal year 2001 year-end funds to purchase 21 computers and monitors at a total cost of $47,372. An e-mail message dated September 13, 2001, within the 412th Test Wing had subject lines stating “Last Minute Purchasing” and included one message line that stated, “… I gather they have quite a bit of unspent credit card money and want to move out on it this week.” The 21 computers and monitors were received on October 23, 2001. At the time of our inspection of these items in June 2002, 11 of the computers and 9 of the monitors were located in a storage area and many of the items were still in the original shipping boxes. According to the Contracting Director, the computers were purchased to support hiring of engineers and engineering contractors. However, recruiting efforts were delayed because some applicants did not accept offered positions and further recruiting efforts were suspended during implementation of a new personnel system in the fall of 2001.
The Contracting Director told us that the unit commander directed that the computers be stored in their original boxes for safekeeping until recruiting was completed and the equipment could be assigned to engineering staff, which was accomplished in July or August 2002. The fact that the computers were still in their original boxes at least 9 months after they were ordered raises questions about whether there was a legitimate need for these items to be purchased with fiscal year 2001 appropriations. Appropriated funds are available only to meet legitimate needs of the agency during the fiscal year for which the funds were appropriated. Our testing of property items identified two Travis AFB transactions dated October 4, 2000, for purchases of computer equipment totaling $14,128 that involved wasteful spending. For example, when we attempted to observe the items to confirm their existence, we were told that the unit had decided to convert to Dell computers. As a result, within 1 year of their purchase, these items, as well as a number of other computers, were sent to the Defense Reutilization Marketing Service as excess property. In addition, we questioned purchases of civilian clothing for military assistants and costumes for regional band members. While DOD and the Air Force have issued policy that permits the purchase of these types of clothing and designates them as “uniforms,” we believe this clothing represents personal preference attire and should be paid for by the employees. For example, in addition to standard issue uniform clothing items, military assistants are permitted to purchase slacks or skirts, shirts, and blazers to wear while serving as aides to general officers. In addition, we noted purchases including two tuxedos, six dresses, and earrings as costumes for members of regional Air Force bands. Under Air Force policy, regional bands are permitted to purchase tuxedos and evening gowns to be worn as costumes during performances. 
While the civilian clothing for military assistants and band costumes are considered government property, which may be reused as appropriate, they are not likely to be reissued to others. Therefore, we question whether taxpayer funds should be used to pay for these items. DOD and Air Force managers told us that they initiated a number of actions during fiscal year 2002 to improve purchase card controls. These initiatives include use of automated U.S. Bank controls to (1) tie cardholder credit limits to allocations of budget authority, (2) automatically deactivate purchase card accounts where monthly statements have not been reconciled and reviewed by the approving officials within prescribed time frames, and (3) cancel purchase card accounts for approving officials with responsibility for excessive numbers of cardholders. According to Air Force officials, they plan to periodically lower the threshold for suspending purchase card accounts until the approving official’s span of control complies with DOD span of control guidelines. In addition, the DOD Comptroller appointed a Charge Card Task Force, which issued its final report on June 27, 2002. The Task Force report included a number of recommendations, including establishing a purchase card concept of operations, accelerating the electronic certification and bill paying process, improving training materials, identifying best practices in areas such as span of control and purchase card management skill sets, and establishing more effective means of disciplining those who abuse the purchase cards. The recommendations address many of the concerns we identified in our Air Force work. 
In response to our DOD audits, the Congress has recently enacted amendments in section 1007(a) of DOD’s fiscal year 2003 authorization act that address requirements for (1) periodic reviews to be performed to determine whether each purchase card holder has a need for the purchase card, (2) periodic inspector general audits to identify potentially fraudulent, improper, and abusive uses of purchase cards, (3) appropriate training for cardholders and oversight officials, and (4) specific policies regarding the number of purchase cards issued by various organizations, authorized credit limits, and categories of employees eligible to be issued purchase cards. Further, DOD’s fiscal year 2003 appropriation and authorization acts each include requirements that the regulations issued by the Secretary of Defense provide for appropriate disciplinary actions or other punishment to be imposed in cases in which DOD employees violate purchase card regulations or are negligent or engage in misuse, abuse, or fraud with respect to a purchase card, including removal in appropriate cases. A well-controlled purchase card program is a valuable tool for streamlining the government’s acquisition processes. However, the problems we identified with missing receipts, lack of cardholder reconciliations and approving official review, and failure to follow requirements in laws, regulations, and DOD and Air Force policies and procedures resulted in control environment weaknesses that leave the Air Force vulnerable to fraud and improper use of the purchase card, as well as abuse and wasteful spending. Also, although Air Force management has been proactive in establishing improved controls, it has not ensured that installation-level program coordinators—the primary program management officials—have the tools to develop local control systems and adequate oversight activities. Further, installation contract officials have not consistently demonstrated the commitment to enforce established controls. 
Strengthening the control environment will require renewed focus, attention, and commitment to building a robust purchase card infrastructure. To strengthen the overall control environment and improve internal control over the Air Force purchase card program, we recommend that the following actions be taken. We recommend that the Secretary of the Air Force direct the Assistant Secretary of the Air Force for Acquisition and the Deputy Assistant Secretary for Contracting to take the following actions. Establish specific policies and strategies governing the number of purchase cards to be issued with a focus on minimizing the number of cardholders. Direct all command and installation-level agency program coordinators to review purchase card use with a view toward eliminating unneeded purchase card accounts. Eliminate purchase cards used to facilitate line item accounting. Direct all agency program coordinators to review the number of cardholders who report to an approving official and make the changes necessary so that approving officials do not have responsibility for reviewing more cardholder accounts than allowed by Air Force and DOD policies. Review existing credit limits and monthly spending and develop policies and strategies on credit limits provided to cardholders with a focus on minimizing specific cardholder spending authority and minimizing the federal government’s financial exposure. Deactivate purchase card accounts of alternate cardholders and approving officials when primary cardholders and approving officials are available. Establish specific training courses for cardholders, approving officials, and agency program coordinators tailored to the specific responsibilities associated with each of those roles. Require installation program coordinators to track and monitor corrective actions on purchase card audit and annual surveillance findings and provide periodic status reports to their installation contracting directors.
Develop and implement a program oversight system for program coordinators that includes standard activities and analytical tools to be used in evaluating program results. Require reports on annual surveillance results to include an assessment of control environment issues, including the ratio of cardholders to employees, ratio of approving officials to cardholder accounts, ratio of monthly credit limits to actual spending, and number of cardholders and approving officials requiring training. Assess the adequacy of human capital resources devoted to the purchase card program, especially for oversight activities, at each management level, and provide needed resources where appropriate. We also recommend that the Secretary of the Air Force direct the Assistant Secretary of the Air Force for Acquisition and the Deputy Assistant Secretary for Contracting to make the following revisions to Air Force Instruction 64-117, Air Force Government-wide Purchase Card Program. Correct faulty records retention guidance by referring to specific guidelines in the Federal Acquisition Regulation, National Archives and Records Administration federal records retention guidelines, DOD's Financial Management Regulation, and other federal guidelines as appropriate. Require purchase card program management and administrative records generated by installation program coordinators and approving officials, such as records of cardholder and approving official appointments and training, cardholder delegations of authority, and purchase card surveillances, to be retained for 3 years. Stipulate, in the body of the Instruction, that approving officials are required to have annual purchase card refresher training. Require that the surveillance checklist, which is included in an appendix to the Air Force Instruction, be used to guide and document surveillance results.
Require reports on the results of annual surveillances to be signed by installation contracting directors to demonstrate management oversight and “tone at the top.” Require reports on surveillance results to be addressed to unit commanders. Require reports on surveillance results to include recommendations for unit commander action, where approving officials and cardholders have failed to follow Air Force policy—particularly policy related to federal regulations, such as micropurchase requirements and mandated sources of supply. To resolve noncompliance with requirements in law for proper certification of purchase card payments, we recommend that the Secretary of the Air Force take the following actions. Direct the Assistant Secretary of the Air Force for Acquisition and the Deputy Assistant Secretary for Contracting to work with the Under Secretary of Defense (Comptroller) to resolve inconsistencies between DOD and Air Force policies and procedures for reconciling purchase card statements prior to payment. Develop a strategy for achieving Air Force compliance with requirements in the law that DOD purchase card policies and procedures require reconciliation of purchase card statements prior to payment. We recommend that the Secretary of the Air Force direct the Assistant Secretary of the Air Force for Acquisition and the Deputy Assistant Secretary for Contracting to revise Air Force Instruction 64-117 to provide cardholders, approving officials, and installation program coordinators with detailed instructions on the following specific control activities. Establish appropriate criteria, including types of items and dollar thresholds for documenting independent receiving and acceptance of items obtained with a purchase card. 
Establish specific procedures for documenting independent receiving, such as requiring the approving official or supervisor to sign and date the vendor invoice, sales receipt, or credit card receipt, or requiring the approving official to sign the cardholder’s monthly purchase log to verify that items noted as having been received were actually received. Require cardholders to maintain documentation of timely and independent receiving and acceptance of items obtained with a purchase card. Require reconciliation of monthly purchase card statements associated with accounts that were “shut down” (suspended) in July 2002 due to lack of cardholder reconciliation and approving official review. Verify that all potentially fraudulent and erroneous transactions that have been detected are disputed and properly resolved. Require timely cardholder notification to the property accountability officer of pilferable property, such as fax machines, digital cameras, and palm pilots obtained with the purchase card. Encourage installation contracting officers to consider the benefits of central purchasing and receiving and acceptance of computer equipment by installation information technology units to facilitate recording computer equipment in accountable property records at the time it is received. We also recommend that the Deputy Assistant Secretary for Contracting revise Air Force Instruction 64-117 to define and list examples of sensitive and pilferable property purchased with a government purchase card, including cell phones, digital cameras, fax machines, palm pilots, and copiers and printers, and require prompt recording of these items in installation property systems. In addition, we recommend that the Assistant Secretary of the Air Force for Logistics establish policies and procedures for recording all pilferable and sensitive property, including digital cameras, palm pilots, and cell phones, in installation accountable property records. 
At a minimum, require installations to follow DOD policies and procedures on accountable property. We recommend that the Assistant Secretary of the Air Force for Financial Management (Comptroller) direct the Air Force Audit Agency and Air Force Office of Special Investigations to establish an Air Force-wide database of known fraud cases by type of fraud, including purchase card fraud, that can be used to identify systemic weaknesses and deficiencies in existing internal control and to develop and implement additional control activities, if warranted or justified. We recommend that the Assistant Secretary of the Air Force for Acquisition and the Deputy Assistant Secretary for Contracting take the following actions. Establish an Air Force-wide database of known purchase card fraud cases by type of fraud, including vendor fraud and compromised accounts, that can be used to identify deficiencies in existing internal control and implement additional control activities, if warranted. Identify vendors with which the Air Force used purchase cards to make frequent, recurring purchases, evaluate Air Force purchasing practices with those vendors, and where appropriate, develop contracts with those vendors to optimize Air Force purchasing power. Review organizational use of the purchase card and revoke purchase cards issued to organizations that do not have authority to participate in the governmentwide purchase card program. Cancel convenience check privileges of cardholders who have continued to improperly use convenience checks. Require accounting adjustments to be made to correct transactions that were charged to the wrong appropriation account with respect to fiscal year and purpose of the expenditures. Establish appropriate, consistent Air Force-wide policy as a guide for taking disciplinary actions with respect to cardholders and approving officials who make or approve fraudulent, improper, or abusive purchase card transactions. 
- Require cardholders and/or approving officials to reimburse the government for any unauthorized or erroneous purchase card transactions that were not disputed.
- Require benefiting individuals to reimburse the government for the cost of any personal items that they requested or directed a cardholder to purchase for them.

We also recommend that the Under Secretary of Defense (Comptroller) direct the Charge Card Task Force to assess the above recommendations, as well as the strengths in the Air Force purchase card program that we identified, and, to the extent applicable, incorporate them into its future recommendations to improve purchase card policies and procedures throughout DOD.

On December 13, 2002, DOD’s Purchase Card Joint Program Management Office and the Air Force provided oral comments on a draft of this report. DOD and Air Force purchase card officials concurred with 29 of our 39 recommendations and partially concurred with 9 others. At the time we finalized our work, DOD had not provided a response to our remaining recommendation that the Charge Card Task Force assess the recommendations in this report and incorporate them, to the extent applicable, into its future recommendations to improve purchase card policies and procedures throughout DOD. Of the 9 recommendations involving partial concurrences, the DOD and Air Force officials (1) agreed in substance with 5 of our recommendations, (2) noted that the Air Force Office of Special Investigations and DOD Inspector General have responsibility for actions on 2 of our recommendations related to establishing a database of fraud cases by type of fraud that can be used to identify systemic weaknesses and deficiencies in controls, and (3) indicated alternative actions have been initiated on the remaining 2 recommendations.
With regard to agreement on the substance of our recommendations, Air Force officials stated that they would (1) suspend alternate accounts when primary cardholders and billing officials are available, (2) revise the Air Force purchase card Instruction to require reports on purchase card surveillance results to be signed by contracting squadron commander or chief of the contracting office, (3) require reconciliation of monthly purchase card statements associated with accounts that were “shut down” (suspended) in July 2002, (4) issue a policy letter to encourage installation Contracting Officers to consider the benefits of central purchasing and receiving and acceptance of computer equipment by installation Information Technology units, and (5) revise the Air Force purchase card Instruction to define and list examples of sensitive and pilferable property and establish clear accountability and/or visibility criteria. With regard to two recommendations related to establishing an Air Force- wide database of known fraud cases by type of fraud to identify and correct systemic weaknesses and deficiencies in existing internal control, Air Force officials stated that the Air Force Office of Special Investigations in conjunction with the other Defense Criminal Investigative Organizations now reports quarterly information on purchase card investigations to the DOD Inspector General. The officials told us that the DOD Inspector General has been directed to develop a centralized purchase card database on known fraud cases and audit results that can be used to identify potential deficiencies in existing internal controls. They said that the Air Force will evaluate the Air Force cases and audits to determine the effectiveness of existing internal controls and implement additional control activities, if warranted. 
Alternative Air Force actions relate to our recommendations that the Deputy Assistant Secretary for Contracting (1) review organizational use of the purchase card and revoke purchase cards issued to organizations that do not have authority to participate in the governmentwide purchase card program and (2) establish appropriate, consistent Air Force-wide policy as a guide for taking disciplinary actions with respect to cardholders and approving officials who make or approve fraudulent, improper, or abusive purchase card transactions. With regard to action on the first recommendation, Air Force officials stated that the Chaplain Service has authority to issue its own policies and procedures, including purchase card authority. However, they stated that the Head of the Air Force Chaplain Office will recommend reinstatement of the Chaplain Funds in DOD Directive 1015.1, Establishment, Management, and Control of Nonappropriated Fund Instrumentalities, which is in the process of being updated to reflect current DOD and Air Force policies regarding the government purchase card. The Air Force also agreed to review organizational use of the purchase card and revoke purchase cards issued to organizations that do not have authority to participate in the governmentwide purchase card program. With regard to action on our recommendation to establish Air Force-wide policy as a guide for taking disciplinary actions, Air Force officials noted the existing guidance in the Air Force purchase card Instruction, which is discussed earlier in this report. They also stated that the Deputy Assistant Secretary of the Air Force (Contracting) has issued a memorandum requiring that a summary of each purchase card fraud and each instance of repeated misuse of the purchase card be briefed quarterly by the contracting squadron commander to the installation commander along with the disciplinary action taken. 
The Congress recently enacted provisions in DOD’s appropriation and authorization acts that require the Secretary of Defense to establish guidelines and procedures for disciplinary actions to be taken against department personnel for improper, fraudulent, or abusive use of government purchase cards. In addition, the Air Force provided technical comments on our draft report stating that it disagreed with our position that civilian clothing for enlisted aides and costumes for military band members represented abusive or questionable transactions. The Air Force referred to its Enlisted Aide Handbook and Air Force Instruction 36-2123 as authority for purchasing civilian attire, which is designated as a “uniform” for enlisted aides. Air Force officials pointed out that their handbook specifically states, “operation and maintenance funds are used when purchasing uniform items” and “local purchase is authorized and encouraged.” Air Force officials also stated that band costumes are authorized purchases in accordance with Air Force Instruction 35-101 and that band costumes may be reused, as appropriate. The Air Force’s position appears to be that any item defined in its policy as a uniform or band costume can be purchased using a purchase card and paid for with appropriated funds. We continue to believe that these clothing purchases are questionable because the Air Force did not adequately explain the circumstances of the purchases, such as the purpose of the clothing and the vendors involved. The Air Force’s policy opens the door for abuse, and its implementation merits close scrutiny. As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute this report until 30 days from its date.
At that time, we will send copies to interested congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense (Comptroller); the Secretary of the Air Force; the Assistant Secretary of the Air Force for Acquisition; the Deputy Assistant Secretary of the Air Force for Contracting; the Assistant Secretary of the Air Force for Logistics; the Director of the Defense Finance and Accounting Service; and the Director of the Office of Management and Budget. We will make copies available to others upon request. The report also will be available free of charge on GAO’s Web page at http://www.gao.gov. Please contact Gregory D. Kutz at (202) 512-9505 or kutzg@gao.gov, John Ryan at (202) 512-9587 or ryanj@gao.gov, or Gayle L. Fischer at (202) 512-9577 or fischerg@gao.gov, if you or your staff have any questions concerning this report. Major contributors to this report are acknowledged in appendix IV.

We audited the effectiveness of the Air Force’s internal controls and payment of its fiscal year 2001 purchase card transactions. The Air Force’s purchase card program is the smallest of the three military services’ programs, with fewer transactions and dollars spent than the Army’s or the Navy’s. We selected our four case study locations by identifying major commands with the largest purchase card sales volume and number of transactions. We selected major Air Force commands that accounted for about 69 percent of total purchase card charges and 65 percent of total transactions for fiscal year 2001. We then selected one installation within each of the four commands based on the magnitude of purchase card activity (sales volume and number of transactions). We also considered the results of prior Air Force Audit Agency work. We selected the following Air Force installations for our case study work.
At the four Air Force installations, we evaluated the policies and procedures used to guide the purchase card program and the activities used to oversee it. We used a case study approach to evaluate the local purchase card programs, and our work consisted of three major segments: (1) an assessment of the overall control environment, including the adequacy of the Air Force’s policies and procedures, (2) an evaluation of the effectiveness of key internal control activities, and (3) a determination of whether evidence existed of potentially fraudulent, improper, and abusive or questionable transactions. Finally, we assessed management actions taken in fiscal year 2002 to improve purchase card controls. To assess the overall control environment, we used as our primary criteria applicable laws and regulations; our Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999); and our Internal Control Standards: Internal Control Management and Evaluation Tool (GAO-01-1008G, August 2001). To assess the management control environment, we applied the fundamental concepts and standards in GAO’s Internal Control Standards to the practices followed by management. To test the implementation of key control activities during fiscal year 2001 at the four installations we audited, we obtained from DOD the U.S. Bank database of Air Force purchase card transactions from October 1, 2000, through September 30, 2001. We did not verify the accuracy of U.S. Bank’s database. We selected stratified random probability samples of 150 to 152 purchase card transactions from the population of Air Force transactions at each case study location. With these statistically valid samples, each transaction in the four locations’ populations had a nonzero probability of being included, and that probability could be computed for any transaction.
Within each installation we stratified the population of transactions by the dollar value of the transaction and by whether the transaction was likely to be for a purchase of computers and related equipment. Each sample transaction for an installation was subsequently weighted in the analysis to account statistically for all the transactions in the population of that installation, including those that were not selected. For each transaction sampled, we tested whether key internal control activities had been performed. For each control activity tested, we projected an estimate of the percent of transactions for which the control activity was not performed, for each installation. Because we followed a probability procedure based on random selections of transactions, our sample for each installation is only one of a large number of samples that we might have drawn. Since each sample could have produced different estimates, we express our confidence in the precision of our particular samples’ results (that is, the sampling error) as 95 percent confidence intervals. These are intervals that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true (unknown) values in the study populations. Although we projected the results of our samples to the populations of transactions at the respective case study locations, the results cannot be projected to the population of Air Force transactions or installations as a whole. 
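The weighting and confidence-interval computation described above can be sketched in code. This is a minimal illustration of a stratified estimate with a normal-approximation 95 percent confidence interval and a finite population correction; the stratum sizes and failure counts below are invented for illustration and are not the actual audit data.

```python
import math

def stratified_failure_estimate(strata, z=1.96):
    """Weighted failure-rate estimate and 95 percent confidence interval
    for a stratified random sample. Each stratum is a dict with the
    population size N, the sample size n, and the number of sampled
    transactions for which the control activity failed."""
    total_pop = sum(s["N"] for s in strata)
    estimate = 0.0
    variance = 0.0
    for s in strata:
        p = s["failures"] / s["n"]        # stratum failure rate
        weight = s["N"] / total_pop       # stratum weight in the population
        estimate += weight * p
        fpc = 1 - s["n"] / s["N"]         # finite population correction
        variance += weight**2 * fpc * p * (1 - p) / (s["n"] - 1)
    half_width = z * math.sqrt(variance)
    return estimate, max(0.0, estimate - half_width), min(1.0, estimate + half_width)

# Illustrative strata (not actual audit data): computer-related and
# high-dollar transactions are sampled at higher rates, as in the
# stratification described above.
strata = [
    {"N": 9000, "n": 100, "failures": 12},  # routine low-dollar purchases
    {"N": 800,  "n": 30,  "failures": 6},   # computer-related purchases
    {"N": 200,  "n": 20,  "failures": 5},   # high-dollar purchases
]
est, lower, upper = stratified_failure_estimate(strata)
print(f"estimated failure rate {est:.1%}, 95% CI [{lower:.1%}, {upper:.1%}]")
```

Weighting each sampled transaction by its stratum's share of the population is what lets results from disproportionately sampled strata represent the installation's full population of transactions.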
Tables 13 through 20 present (1) the results of our tests for one or more control attributes, (2) the point estimates of the failure rate for the attributes, (3) the two-sided 95 percent confidence intervals for the failure rates for each attribute, (4) our assessments of the effectiveness of the controls, and (5) the relevant lower and upper bounds of a one-sided 95 percent confidence interval for the failure rate. All numbers in these tables are rounded to the nearest percentage point. We use one-sided confidence bounds to classify the effectiveness of a control activity. If the one-sided upper bound does not exceed 5 percent, then the control activity is effective. If the one-sided lower bound exceeds 10 percent, then the control is ineffective. Otherwise, we say that the control is partially effective. Partially effective controls may include those for which there is not enough evidence to assert either effectiveness or ineffectiveness. For example, if we were 95 percent confident that the failure rate for a particular control is less than 3 percent, we would categorize that control activity as “effective” because 3 percent is less than the 5 percent standard. Similarly, if we were 95 percent confident that the failure rate for a particular control is greater than 72 percent, we would categorize that control as “ineffective” because 72 percent is greater than the 10 percent standard. Table 13 shows the results of our tests of controls for documenting cardholder and approving official appointments. Local commanders appoint cardholders and approving officials for their units and notify the installation program coordinator who then schedules these individuals for purchase card training. Table 14 shows the results of our tests of controls for documenting initial training of cardholders and approving officials. 
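The classification rule described above maps directly to a small decision function. A minimal sketch, using the two one-sided 95 percent confidence bounds as inputs (the specific bound values below are illustrative):

```python
def classify_control(lower_bound, upper_bound):
    """Classify a control activity from one-sided 95 percent confidence
    bounds on its failure rate, per the rule described above: effective
    if the upper bound does not exceed 5 percent, ineffective if the
    lower bound exceeds 10 percent, otherwise partially effective."""
    if upper_bound <= 0.05:
        return "effective"
    if lower_bound > 0.10:
        return "ineffective"
    return "partially effective"

# The two examples from the text: a failure rate bounded below 3 percent,
# and a failure rate bounded above 72 percent.
print(classify_control(0.01, 0.03))  # effective
print(classify_control(0.72, 0.95))  # ineffective
print(classify_control(0.04, 0.20))  # partially effective
```

Note that the rule is deliberately asymmetric: a control is declared effective or ineffective only when the evidence clears the relevant bound, and everything in between defaults to "partially effective."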
Air Force Instruction 64-117 requires cardholders and approving officials to receive purchase card training before they can be assigned a purchase card account. Table 15 shows the results of our tests of controls for documenting cardholder delegations of purchasing authority. After cardholders complete purchase card training, installation program coordinators are to prepare a letter of delegation of purchasing authority indicating the cardholder’s transaction level spending limit and monthly credit limit. Table 16 presents the results of our tests for documentation of advance purchase authorization. Air Force Instruction 64-117 requires advance authorization for purchases of certain items, including computer and communication equipment, video equipment, medical items, and hazardous materials. Estimates for this table are based only on the sample transactions for which advance authorization of purchases was required. Table 17 presents the results of our tests for documentation of independent receiving and acceptance, by someone other than the cardholder, of goods and services purchased with a government purchase card. This requirement is not specifically addressed in DOD policy or Air Force purchase card program Instruction 64-117. We believe that independent documentation of receipt of items purchased by a cardholder is a basic internal control activity that provides additional assurance to the government that purchased items are not acquired for personal use and that they come into the possession of the government. Table 18 presents the results of our tests for documentation of cardholder reconciliations. Cardholder reconciliations are key to identifying potentially fraudulent transactions resulting from compromised accounts, duplicate or improper vendor charges, and errors.
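In substance, a cardholder reconciliation matches each line on the monthly statement to an entry in the cardholder's purchase log; statement lines with no matching log entry are the candidates for dispute. A minimal sketch of that matching, with invented transaction records (the actual statements and logs were paper documents):

```python
def reconcile(statement_lines, purchase_log):
    """Match each statement line to an unused purchase-log entry by
    vendor and amount; statement lines with no matching log entry are
    candidates for dispute with the vendor or, if unresolved, the bank."""
    unmatched_log = list(purchase_log)
    disputes = []
    for line in statement_lines:
        for entry in unmatched_log:
            if entry["vendor"] == line["vendor"] and entry["amount"] == line["amount"]:
                unmatched_log.remove(entry)  # each log entry matches only once
                break
        else:
            disputes.append(line)            # no log entry: dispute candidate
    return disputes

statement = [
    {"vendor": "OFFICE SUPPLY CO", "amount": 89.50},
    {"vendor": "HARDWARE STORE", "amount": 240.00},
    {"vendor": "UNKNOWN MERCHANT", "amount": 1999.99},  # not in the log
]
log = [
    {"vendor": "OFFICE SUPPLY CO", "amount": 89.50},
    {"vendor": "HARDWARE STORE", "amount": 240.00},
]
for line in reconcile(statement, log):
    print("dispute:", line["vendor"], line["amount"])
```

Consuming each log entry as it matches also surfaces duplicate vendor charges: a second identical statement line with no second log entry falls through to the dispute list.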
As evidence that purchase card statements were reconciled, we accepted check marks, notes, sequential numbering, and numbering systems that tied transactions on the statement to items on the cardholders’ purchase card logs. Table 19 presents the results of our tests of timely approving official review of cardholders’ monthly, reconciled statements. Approving official review is a recognized control activity at all levels of the purchase card program, and the approving official review process has been described as the first line of defense against misuse of the card. DOD’s Purchase Card Joint Program Management Office and Air Force Instruction 64-117 recognize that approving official review of monthly purchase card statements is central to ensuring that purchase card transactions are appropriate. The Air Force Instruction requires that approving officials review and approve reconciled cardholder statements within 15 days of receipt of the monthly statement, but no later than the 15th day of the following month. Table 20 shows the results of our tests for documentation of supporting invoices or receipts. GAO’s Internal Control Standards state, “all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. All documentation and records should be properly managed and maintained.” Without a receipt, independent evidence of the description and quantity of what was purchased and the price paid is not available. In testing for evidence of a receipt, we accepted either the original or a copy of the invoice, sales slip, or other store receipt. We also tested nonrepresentative selections of accountable property items that were included in our sampled transactions. Because some transactions were for property items that were physically located off base, we elected to perform our test work on property items that were assigned to the base. 
We tested whether these items had been recorded in the installation’s accountable property records, including unit-level records, in a timely manner and whether the installation could demonstrate the items’ existence. We confirmed the existence of the items we tested through physical observation. In addition to our audit of statistical samples of transactions at the four case study installations, we also used data mining techniques to identify other selected transactions at the four locations and throughout the Air Force’s fiscal year 2001 purchase card transactions to determine if indications existed of potentially fraudulent, improper, and abusive or questionable purchase card activity. Our data mining included identifying transactions with certain vendors that were more likely to sell unauthorized or personal items. We also based our selection on the nature, dollar amount, date, and other identifying characteristics of the transactions. Because of the large number of transactions that met these criteria, we did not look at all potential abuses of the purchase card. For a small number of these transactions at each of the four installations and from the Air Force-wide database, we requested limited documentation, usually the supporting invoice, that could provide additional indications as to whether the transactions were potentially fraudulent, improper, or abusive or questionable. If the additional documentation indicated that the transactions were likely proper and valid, we did not pursue further documentation. If the additional documentation was not provided, or if it indicated further issues related to the transactions, we obtained and analyzed additional documentation or information about these transactions.
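Data mining of this kind amounts to filtering the transaction database on selection criteria like those described above. The sketch below flags transactions by suspect vendor, by amounts just under the $2,500 micropurchase limit (a possible split-purchase indicator), and by weekend purchase dates; the field names, vendor list, and thresholds are illustrative assumptions, not the audit's actual criteria.

```python
import datetime

def flag_transactions(transactions, suspect_vendors, micro_limit=2500.00):
    """Flag transactions for follow-up review: vendors more likely to
    sell personal items, amounts just under the micropurchase limit,
    and weekend purchase dates. Criteria are illustrative."""
    flagged = {}
    for t in transactions:
        reasons = []
        if t["vendor"] in suspect_vendors:
            reasons.append("suspect vendor")
        if 0.9 * micro_limit <= t["amount"] <= micro_limit:
            reasons.append("just under micropurchase limit")
        if datetime.date.fromisoformat(t["date"]).weekday() >= 5:
            reasons.append("weekend purchase")
        if reasons:
            flagged[t["id"]] = reasons
    return flagged

sample = [
    {"id": "T1", "vendor": "PAWN SHOP", "amount": 150.00, "date": "2001-03-14"},
    {"id": "T2", "vendor": "OFFICE SUPPLY CO", "amount": 2480.00, "date": "2001-06-02"},
    {"id": "T3", "vendor": "OFFICE SUPPLY CO", "amount": 45.00, "date": "2001-03-14"},
]
print(flag_transactions(sample, {"PAWN SHOP"}))
```

As the text notes, such filters produce far more hits than can be examined, so flagged transactions serve only as a pool from which a smaller set is selected for documentation requests.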
While we identified some potentially fraudulent, improper, and abusive or questionable transactions, our work was not designed to identify, and we cannot determine, the extent of potentially fraudulent, improper, or abusive transactions. For those potentially fraudulent transactions that had been or were being investigated at the four audited installations, we discussed the cases with the investigators and/or obtained records and reports on the investigations. We also interviewed purchase card officials and Air Force criminal investigators to identify other Air Force purchase card fraud cases that had been or were being investigated. We did not audit the Defense Finance and Accounting Service’s purchase card payment process. We also did not audit electronic data processing controls used in processing purchase card transactions. The installations received hard copy paper monthly bills containing the charges for their purchases and used manual processes for much of the period we audited, which reduced the relevance of auditing electronic data processing controls. We briefed DOD managers, including officials in DOD’s Purchase Card Joint Program Management Office, major command purchase card program coordinators, and purchase card program officials at the installations we audited on the details of our audit, including our objectives, scope, and methodology and our findings. On November 20, we requested comments on a draft of this report. We obtained oral comments from DOD and Air Force purchase card officials on December 13, 2002, and have summarized those comments in the “Agency Comments and Our Evaluation” section of this report. We conducted our audit work from January through mid-November 2002 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. 
The Air Force purchase card program is part of the governmentwide Commercial Purchase Card Program established to streamline federal agency acquisition processes by providing a low-cost, efficient vehicle for obtaining goods and services directly from vendors. Under the General Services Administration’s blanket contract, the Air Force has contracted with U.S. Bank for its purchase card services. During fiscal year 2001, DOD reported that it used purchase cards to make about 10.7 million transactions for goods and services at a cost of over $6.1 billion. During this same period, the Air Force reported that it used government purchase cards to make about 3 million transactions at a cost of about $1.4 billion. This represents about 23 percent of DOD’s activity for fiscal year 2001. Air Force purchase card transactions were made using about 80,000 VISA cards issued to civilian and military employees. DOD has mandated the use of the purchase card for all purchases at or below $2,500 and has authorized the use of the card to pay for specified larger purchases. For example, the purchase card may be used to purchase authorized supplies, equipment, and nonpersonal services up to the $2,500 micropurchase threshold. If authorized to make purchases above $2,500, cardholders not in contracting organizations are to use the government purchase card only to obtain items from prepriced contracts and other pricing agreements, such as the Federal Supply Schedule, blanket purchase agreements, and Indefinite Delivery/Indefinite Quantity contracts. Purchases over the $2,500 micropurchase threshold and up to the simplified acquisition threshold of $25,000 must be made in accordance with streamlined acquisition guidelines in the Federal Acquisition Regulation (FAR). The purchase card should normally not be used for cash advances; travel-related purchases; rentals or leases of land or buildings; utility services; or hazardous/dangerous items, such as explosives, munitions, toxins, and firearms.
The purchase card can be used for both micropurchases and payment of other purchases. Although most cardholders have limits of $2,500, some have limits of $25,000 or higher. The Federal Acquisition Regulation, Part 13, “Simplified Acquisition Procedures,” establishes criteria for using purchase cards to place orders and make payments. DOD has a supplement to this regulation that contains sections on simplified acquisition procedures. U.S. Treasury regulations govern purchase card payment certification processing and disbursements. DOD’s Purchase Card Joint Program Management Office, which is in the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology, has issued departmentwide guidance related to the use of purchase cards. However, each service has its own policies and procedures governing the purchase card program. The Air Force purchase card program operates under federal and Air Force guidance, which together form the policy and procedural foundation for the program. The Air Force headquarters Acquisition Office is responsible for the overall management of the Air Force’s purchase card program. The Acquisition Office has published servicewide guidelines in Air Force Instruction 64-117, Governmentwide Purchase Card Program, dated December 6, 2000, to establish responsibilities and procedures and provide administrative guidance for its government purchase card operations. Under the Air Force instruction, each Air Force command’s head contracting officer authorizes agency purchase card program coordinators in local Air Force units to obtain purchase cards and establish credit limits. The program coordinators are responsible for administering the purchase card program within their designated span of control and serve as the communication link between Air Force units and the purchase card-issuing bank. The other key personnel in the purchase card program are the approving officials and the cardholders.
They are responsible for implementing internal controls to ensure that transactions are appropriate. Figure 2 illustrates the general design of the purchase card processes for the Air Force. The overall process begins with the cardholder ordering or purchasing goods or services. Each Air Force installation’s Financial Services Office certifies monthly bills for payment upon receipt. After certification, the Financial Services Offices notify the Defense Finance and Accounting Service that monthly purchase card statements are ready for payment. The process ends with cardholder reconciliation and approving official review and approval of monthly purchase card statements after the bills have been paid. Any invalid transactions identified during the reconciliation and review process are to be disputed first with the vendor, and if not resolved, a “Disputed Item” form is to be submitted to U.S. Bank for credit. Purchase cardholders are delegated limited contracting officer-ordering responsibilities, but they do not negotiate or manage contracts. When a supervisor requests that a staff member receive a purchase card, the agency program coordinator is to first provide training on purchase card policies and procedures and then establish a credit limit and issue a purchase card to the staff member. After completing training, cardholders are issued a purchase card, which bears their name and the account number that has been assigned to them. The cardholder is expected to safeguard the purchase card as if it were cash. Each cardholder has an established daily and monthly credit limit and is designated to make purchases at selected types of vendors. Cardholders use purchase cards to order goods and services for their units as well as their customers. Cardholders may pick up items ordered directly from the vendor or request that items be shipped directly to receiving locations or end users.
The approving official is responsible for providing assurance that all purchases made by the cardholders within his or her cognizance were appropriate and that the charges are accurate. The approving official is supposed to resolve all questionable purchases with the cardholder. In the event an unauthorized purchase is detected, the approving official is supposed to notify the agency program coordinator and other appropriate personnel within the command in accordance with the command procedures. Under governmentwide guidelines, agencies are required to first attempt to resolve invalid transactions with the vendors. Transactions that are not resolved with the vendors may be disputed with the U.S. Bank. The purchase card payment process begins with receipt of the monthly purchase card billing statements from the bank. The Air Force uses a pay and confirm process whereby the monthly purchase card statements received from U.S. Bank are certified as proper for payment by each installation’s Financial Services Office within 3 to 5 business days and forwarded to the Defense Finance and Accounting Service (DFAS) for payment. Under guidelines in the Air Force purchase card Instruction, cardholders are required to review and reconcile their monthly purchase card statements within 5 days of receipt, and approving officials are required to review cardholders’ monthly statements as reconciled and dispute any invalid charges within 15 days of receipt, but no later than the 15th day of the following month. DFAS effectively serves as a payment processing service and relies on the Air Force Financial Services Office certification of the consolidated monthly bill for each installation as support to make the payment. The DFAS vendor payment system then makes a single payment to U.S. Bank by electronic funds transfer for each Air Force installation’s monthly purchase card expenditures. During the summer of 2001, the Air Force began implementing U.S. 
Bank’s Customer Automation and Reporting Environment (CARE) system. CARE provides several automated purchase card management features, including on-line cardholder purchase logs, transaction histories, and management reporting and inquiry functions. CARE management reports identify managing accounts, approving officials and cardholders’ accounts, transaction history, rejected transactions, and the reasons for the rejections, such as transactions in excess of the cardholder’s credit limit, potential split purchases, inactive accounts, and blocked merchant category codes. During fiscal year 2002, the Air Force implemented additional purchase card management controls using U.S. Bank’s CARE system. These enhanced controls include an automated link of cardholder credit limits to budgetary funding authorizations to help ensure that purchase card activity will not exceed available funds. Another control feature monitors approving official span of control over cardholder accounts to help ensure that installations are meeting DOD and Air Force goals for reducing and eliminating excessive approving official span of control. The enhanced controls also include automated tracking of cardholder review of individual transactions on their monthly purchase card statements and billing officials’ approval of those statements. Statements cannot be approved until the cardholders have physically “touched” (clicked on) each transaction on the computer screen to indicate that they have reviewed the transactions. CARE automatically shuts down the accounts of billing officials who have not approved their consolidated statements within 60 days. No charges can be processed against these accounts until they are reviewed/reconciled and approved. U.S. Bank has also shut down accounts that indicate potential fraud. 
For example, the bank has shut down Air Force purchase card accounts due to out-of-state transactions on weekends and other suspicious patterns of activity that indicate potentially compromised accounts. The following examples illustrate the types of cases investigated by the Air Force Office of Special Investigations. During May 2000, after a Nellis AFB approving official retired, the new approving official’s review of a cardholder’s monthly statements detected questionable transactions for which no receipts were available. The new official notified contracting officials, who contacted Air Force investigators. The cardholder, an E-4, senior airman, used her government purchase card to obtain between $5,000 and $20,000 in merchandise, which she stole and then sold, pawned, or left at her residence. When questioned by her supervisor, who was the approving official, the airman admitted that she stole the items she had purchased with the government purchase card. When confronted by investigators, the airman refused to identify specific items of equipment that she stole. For example, the airman stated only that she purchased items from Home Depot and a local hardware store. Nellis AFB contracting officials told us that the cardholder used the purchase card to buy fax machines, calling cards, cordless telephones, digital cameras, chairs, and laser jet printers and sold them to pawnshops and at swap meets for personal gain. The day before being court-martialed, the airman paid back approximately $7,100 to the government. The airman waived her Article 31 rights and pleaded guilty to purchasing and pawning over $7,100 worth of personal items on her purchase card between May 1, 1999, and May 1, 2000. The airman was convicted of larceny in a General Court-Martial and sentenced on March 17, 2001, resulting in a reduction in grade to E-1, forfeiture of $14,768 in military pay, 7 months’ confinement, and a Bad Conduct Discharge.
This fraud was able to occur and continue because the first approving official apparently had not reviewed the cardholder’s monthly purchase card statements and, therefore, had not detected or questioned the fraudulent transactions. On September 27, 2000, the purchase card program coordinator at Misawa Air Base, Japan, notified Air Force investigators about possible government purchase card fraud. The program coordinator’s audit of a cardholder’s account had revealed numerous undocumented charges/purchases. Air Force investigators determined that the fraud was committed by an E-4, senior airman, in the Civil Engineering Squadron whose own purchase card had been revoked for misuse. The airman took advantage of a co-worker’s inexperience and limited English language capability to obtain and improperly use her purchase card. The cardholder was a Japanese citizen employed by the Air Force. In early October 2000, the cardholder gave investigators a signed, sworn statement, in which she related that from approximately March through September 2000, an E-4, senior airman in the Civil Engineering Squadron had repeatedly used the cardholder’s government purchase card to pay bills and make purchases, often without the cardholder’s knowledge. While the cardholder was aware that some of the purchases were made at the squadron’s Self Help Store, she told Air Force investigators that she had no knowledge of the types of items purchased. The cardholder also stated that when she inquired as to the nature of the purchases, the senior airman told her that he would take care of purchases using the card because of her limited English language capabilities. No attempt was made to correct this misuse of the purchase card until the bank declined a large purchase of approximately $50,000 due to the high dollar amount. The declined transaction flagged the account, and the contracting squadron initiated an inquiry. 
Contracting squadron records showed that the senior airman had purchased approximately $10,000 of merchandise using the Japanese cardholder’s account. Air Force investigators’ review of Self Help Store records failed to identify what was purchased and whether the items had ever been received at the store. As a result, investigators were unable to determine whether criminal abuse had occurred. However, because the investigation did identify procedural violations, investigators referred this matter to the command for action. The airman subsequently was reassigned from the civil engineering squadron. The airman was able to use the purchase card for unauthorized transactions because the cardholder failed to maintain custody of the purchase card. On August 14, 2001, investigators assigned to the 325th Security Forces Squadron at Tyndall AFB, Florida, received an allegation that a WG-5 maintenance employee was using his government purchase card to buy personal use items. According to a witness, the cardholder had bragged about using his government purchase card to purchase tools, a television set, and a computer for his personal use. The witness told the investigators that he had accompanied the cardholder to local hardware and auto parts stores in Panama City, Florida, and had observed the cardholder using his government purchase card to buy tools and other items for his son’s vehicle. The cardholder subsequently gave one of the tools to the witness and told him to keep it for his personal use. Squadron investigators coordinated with a local hardware store and obtained security videotape as evidence. The video depicted the cardholder, in the presence of the witness, purchasing a drill bit, which the cardholder subsequently gave to the witness. The cardholder also allegedly used his government purchase card to pay for major engine repairs to his son’s vehicle.
The investigators’ preliminary review of the cardholder’s account disclosed several unauthorized charges for dental work totaling approximately $1,800 and charges for an automotive engine repair for $1,181. Numerous additional suspect charges were identified on the cardholder’s purchase card account. The investigation, which is ongoing, has identified an estimated $5,000 in fraudulent purchases. Coordination with the cardholder’s command disclosed indications of an almost total lack of oversight on the part of the approving official. Staff making key contributions to this report include Bertram J. Berlin, James D. Berry, Jr., Cindy Barnes-Brown, Francine DelVecchio, Carlos M. Garcia, Kenneth M. Hill, Jeffrey A. Jacobson, Noel J. Lance, Richard A. Larsen, James D. Moses, Jerrod J. O’Nelio, Mark F. Ramage, Kenneth H. Roberts, Sidney H. Schwartz, and Gary R. Wiggins.
In July 2001 and March 2002, GAO testified on significant breakdowns in internal controls over purchase card transactions at two Navy sites that resulted in fraud, waste, and abuse. As a result, the Congress asked GAO to audit purchase card controls at DOD. This report focuses on Air Force purchase card controls and addresses whether the overall management control environment and key internal controls were effective in preventing potentially fraudulent, improper, and abusive purchase card transactions. Weaknesses in the overall control environment and breakdowns in key controls relied on to manage the purchase card program leave the Air Force vulnerable to fraud, waste, and abuse. Major contributors to the weak control environment included excessive numbers of purchase cards, with about one purchase card for every seven employees; approving official span of control that far exceeded DOD guidelines; and credit limits that were 12 to 20 times higher than actual spending. Of the five key control activities tested, the Air Force had significant control breakdowns in at least three of them: (1) receipt of goods and services by someone other than the cardholder, (2) cardholder reconciliation, and (3) approving official review of the cardholder’s reconciled statements. The highest failure rates, 69 to 87 percent, at the four locations tested related to approving official review, which DOD views as the first line of defense against misuse of the purchase card. The control breakdowns resulted in purchases that were potentially fraudulent, improper, and abusive or questionable.
GAO also identified potentially fraudulent transactions for which supporting documentation was not available to show the quantity and type of items purchased. Air Force officials could not recall the purpose of these transactions. In addition, GAO identified (1) improper transactions related to weaknesses in controls relied on to prevent splitting purchases into multiple transactions to circumvent micropurchase and cardholder transaction limits and (2) the failure to use mandated sources of supply. Finally, GAO found that cardholders who abused or improperly used the purchase card were not subject to strong disciplinary action or consequences. The Air Force has taken a number of steps to improve control over the purchase card program. For example, it implemented automated controls during fiscal year 2002 to help monitor approving official span of control, credit limits, and cardholder reconciliation and approving official review of monthly statements. If effectively implemented, these controls should help strengthen the overall Air Force purchase card control environment as well as controls over statement reconciliation and approval.
The United States has long been open and receptive toward foreign investment. In 2011, the President issued Executive Order 13577, creating the SelectUSA Initiative, in part, to encourage foreign investment in the United States. In addition, legislation introduced in the 114th Congress is aimed, in part, at attracting more foreign investment in real estate through changes in the tax code. According to some real estate companies we interviewed, investors are attracted to government-leased buildings because they provide a safe and reliable rate of return. Representatives from one real estate company added that the advantages of leasing to the federal government include higher rates of return for investors compared with Treasury bonds, appreciation of the real estate’s value, and the promotion of underdeveloped areas. A representative from another firm said that foreign investors are also interested in the government’s long-term lease of space. Foreign investment in U.S. commercial office buildings has been increasing in recent years. According to Real Capital Analytics, annual foreign investment in significant U.S. commercial office buildings increased from $11.7 billion in 2011 to $26.5 billion in 2015. As shown in figure 1, of those amounts, investors from Canada, Germany, China, Norway, and South Korea invested the most during those 5 years. For example, data from Real Capital Analytics indicated that in 2015, foreign investors purchased 336 commercial office buildings in the United States, 106 of which were purchased by Chinese investors for a total of $2.8 billion. Chinese investment in U.S. commercial office buildings was part of overall increased Chinese investment in the United States. According to the National Committee on U.S.-China Relations and the Rhodium Group, overall Chinese investment in the United States could reach $30 billion in 2016, up from $15 billion in 2015 and $4.9 billion in 2011.
GSA is responsible for leasing space for many agencies of the federal government and has about 8,300 leases of space. GSA develops, coordinates, issues, and administers real property policies, guidelines, and standards for property under its custody and control and for agencies operating under, or subject to, the authorities of the GSA Administrator. As of March 2016, GSA had about 1,400 leases of high-security space in about 850 buildings. Since 2008, GSA has leased more space than it has federally owned space under its custody and control. For example, in fiscal year 2015, GSA leased 190.8 million square feet of space, compared with having custody and control of 183.2 million square feet of federally owned space. Overreliance on costly leasing is one of the major reasons that federal real property management remains on GAO’s high-risk list. Our work over the years has shown that leasing space often costs the government more than owning buildings, especially for long-term space needs. GSA and DHS’s Federal Protective Service (FPS) have joint responsibility for protecting federal facilities held or leased by GSA. FPS has primary responsibility for the security and protection of buildings and their occupants, whereas GSA has primary responsibility for security fixtures, maintenance, and building access. Some agencies also use their own police forces to protect their facilities. FPS and the client agencies set the facility security levels in consultation with GSA. As discussed later, these levels determine the frequency of required risk assessments of facilities, among other things. CFIUS, an interagency committee chaired by the Treasury Department, reviews transactions that could result in foreign control of a U.S. business, which could include a company that leases space to the federal government, in order to determine the effect of such transactions on the national security of the United States.
Under the Foreign Investment and National Security Act of 2007 (FINSA), CFIUS shall review “any merger, acquisition, or takeover…by or with any foreign person which could result in foreign control of any person engaged in interstate commerce in the United States” to determine the effects of such transaction on the national security of the United States. To review a foreign acquisition of a U.S. business, CFIUS must determine that the acquisition is a “covered transaction.” CFIUS may recommend that the President suspend or prohibit any covered transaction that presents unresolved national security concerns. Under FINSA, the President may block a foreign acquisition that raises national security concerns, but this has rarely occurred. CFIUS’s reviews are confidential and protected from public disclosure. FinCEN is one of the Treasury Department’s primary bureaus for overseeing and implementing policies to prevent and detect money laundering. FinCEN uses anti-money laundering laws such as the Bank Secrecy Act to require reporting and recordkeeping by banks and other financial institutions. The regulation and enforcement of the Bank Secrecy Act involve several different federal agencies, including FinCEN, the federal depository institution regulators—the Federal Deposit Insurance Corporation, Federal Reserve, National Credit Union Administration, and the Office of the Comptroller of the Currency—the Internal Revenue Service (IRS), the Commodity Futures Trading Commission, DOJ, and SEC. Because owning property can provide access to the buildings and building systems, foreign ownership of government-leased space can pose security risks, particularly regarding cybersecurity.
In 2014, we reported that federal facilities are vulnerable to cyber attacks on their building and access control systems (e.g., heating, ventilation, and air-conditioning; surveillance cameras; and electronic card readers), which could provide unauthorized access to the facilities, endanger the occupants, and provide access to information systems. We found that insider threats—which can include disgruntled employees, contractors, or other persons abusing their positions of trust—represent a significant threat to building and access control systems, given insiders’ access to and knowledge of these systems. These insider threats can also include the owners and the people they employ to operate the buildings. In addition, we reported that nations use cyber tools as part of their information-gathering and espionage activities. In our 2014 report, we recommended that the Secretary of Homeland Security direct the Interagency Security Committee (ISC), housed within DHS, to incorporate the cyber threat to building and access control systems into ISC’s list of undesirable events in its Design-Basis Threat report, which informs agencies about the threats they face. DHS implemented our recommendation in 2016. DHS has identified or received reports of cyber attacks on government facilities in recent years, such as incidents at a state law enforcement crime lab and a wastewater plant. Furthermore, in March 2016, the U.S. Attorney for the Southern District of New York announced the indictment of seven Iranians in a cyber attack on a city-owned dam in Rye, NY. In this regard, multiple sources cite China as a primary source of cyber intrusions.
In 2011, the Office of the National Counterintelligence Executive reported that “Chinese actors are the world’s most active and persistent perpetrators of economic espionage.” Attorneys specializing in Chinese business practices and a real estate company representative told us that companies in China are likely to have ties to the Chinese government. In 2014, the Justice Department charged Chinese military hackers with cyber espionage against U.S. corporations and a labor organization for the purpose of gaining a commercial advantage—the first time that criminal charges have been filed against known state actors for hacking. Moreover, according to the Director of National Intelligence, China is the leading suspect in the cyber intrusion into the Office of Personnel Management’s (OPM) systems affecting background investigation files for 21.5 million individuals, which OPM reported in July 2015. To acquire a building for federal agencies, GSA may work with the private sector to design and construct a building that the government then leases—which would give the construction firm access to the building’s structure. The security risk of having access to a building structure was evident in 1987 when the Senate Select Committee on Intelligence reported that “[i]n 1985, the Committee received its first testimony indicating that there was strong evidence that the Soviets had succeeded in incorporating a complex and comprehensive electronic surveillance system into the structure of the new U.S. Embassy under construction in Moscow….” We reported in 1987 that the U.S. government contracted with a Soviet firm to construct the embassy building. Foreign-owned property located near federal facilities may also pose security risks. In 2014, we reported on DOD’s concerns over encroachment by foreign entities conducting business near its test and training ranges. We reported that foreign encroachment may provide an opportunity for surveillance of DOD test and training activities.
Another potential risk to the government regarding foreign-owned leased space is the source of funds used to finance the projects. According to FinCEN, money laundering involves disguising financial assets so they can be used without detection of the illegal activity that produced them. Through money laundering, criminals transform the monetary proceeds derived from criminal activity into funds with an apparently legal source. According to the Federal Financial Institutions Examination Council’s Bank Secrecy Act/Anti-Money Laundering Examination Manual, a compilation of guidance developed by the federal banking agencies and FinCEN, once illegal funds are in the financial system, additional transactions are used to create the appearance of legality. These transactions further shield the criminal from a recorded connection to the funds by providing a plausible explanation for the source of the funds. Examples include the purchase and resale of real estate, investment securities, foreign trusts, or other assets. A 2015 State Department report on money laundering indicated that of 211 countries and jurisdictions, 67 are listed as being “of primary concern” regarding money laundering, including the United States, and 69 are listed as being “of concern.” The report indicated that economies in countries such as the United States that attract funds globally are vulnerable to money laundering activity because the volume and complexity of the available financial options may make criminals believe they may more easily hide their funds. In May 2016, the President announced steps to strengthen financial transparency and combat money laundering, corruption, and tax evasion, including a FinCEN rulemaking intended to strengthen customer due diligence requirements, in part, by requiring covered financial institutions to identify and verify the identity of beneficial owners. 
In the final rule, FinCEN discussed the importance of identifying beneficial owners in the context of assisting financial investigations by law enforcement. Specifically, FinCEN discussed a 2013 case in which New York prosecutors indicted 34 alleged members of Russian-American organized crime groups for moving millions of dollars in unlawful gambling proceeds through a network of shell companies in Cyprus and the United States. In October 2015, GSA provided us with a list of all the space that the agency believed it was leasing from foreign owners. GSA indicated that this list, which included 17 leases, was compiled using information that lessors provided in SAM. Prior to November 1, 2014, GSA was not required to collect certain information from lessors through SAM, such as the parent, subsidiary, or successor entities to the lessor. All except one of these leases were entered into prior to November 1, 2014. We tried to validate the ownership through Real Capital Analytics’ real property database, which indicated that 6 of the 17 leases were with foreign companies, 4 of which were of high-security space. We were unable to validate foreign ownership regarding the other 11 leases because (1) the database indicated that two of the buildings are not owned by foreign companies, (2) the database did not contain ownership information on many of the buildings with GSA-leased space, and (3) two leases on the list were no longer in effect. Based on our independent analysis using the real property database, foreign entities owned high-security space that GSA is leasing in 20 buildings through 25 leases as of March 2016. Our analysis indicated that this space was owned by 16 different foreign entities, 7 of which are based in non-NATO countries. However, the real property database did not include information on all of the buildings in which GSA leases high-security space.
Therefore, the results of our analysis are likely understated, and GSA may be leasing more foreign-owned high-security space than we identified in the 25 leases. For example, we also found that a Japanese parent company ultimately owns a building in Washington, D.C., that was not in the database but which contains high-security space leased by DOJ that is not listed in table 1. According to Real Capital Analytics, its database shows information on the chain of title, which is the succession of title ownership to real property from the present owner back to the original owner, when available. We contacted, or attempted to contact, each company to confirm that the company owned the property and was based in the country identified in the database. In some cases, we were unable to reach the companies to confirm ownership, but we reviewed other information that confirmed ownership, such as leasing documentation, or found that the buildings were part of the companies’ portfolios posted on their websites. In four cases, when we contacted the parties that were identified in the database as the owners or their representatives, we were told that the information was outdated—that they sold the buildings and no longer owned them—or that the database information was incorrect—that the buildings were not owned by foreign affiliates. We excluded those cases from our review. See table 1 for information on 20 of the 25 leases of high-security space that we identified as foreign-owned. When we found that the lessors were incorporated in the United States but their parent companies were based in foreign countries, we included them as foreign owned. In one case, because of the complexity of the transaction involving the purchase of a building containing high-security GSA-leased space, we were unable to determine in which country the immediate owner of the building was based. That property is noted in the table.
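The validation approach described above (matching each GSA lease against an independent ownership database, setting aside records that are missing or outdated, and classifying confirmed foreign owners by country) can be sketched as follows. The schema and field names are hypothetical; neither SAM nor the Real Capital Analytics database exposes this exact structure.

```python
# Countries named in this report as non-NATO owners of GSA-leased space.
NON_NATO = {"China", "Israel", "South Korea", "Japan"}

def classify_leases(gsa_leases, ownership_db):
    """Cross-check each lease's building against an independent ownership
    database. Returns (confirmed foreign-owned leases, unverifiable leases);
    buildings confirmed as U.S.-owned fall in neither list."""
    confirmed, unverified = [], []
    for lease in gsa_leases:
        record = ownership_db.get(lease["building_id"])
        if record is None or not record.get("active", True):
            # No ownership data, or the record is outdated (e.g., a sale).
            unverified.append(lease)
        elif record["owner_country"] != "United States":
            confirmed.append(dict(lease,
                                  owner_country=record["owner_country"],
                                  non_nato=record["owner_country"] in NON_NATO))
    return confirmed, unverified
```

Because unverifiable buildings are tracked separately rather than dropped, the count of confirmed foreign-owned leases is a floor, consistent with the report's caveat that its results are likely understated.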
We found that 26 different agencies and departmental components occupy high-security leased space in buildings that we identified as foreign owned, 22 of which occupy space that we identified as owned by companies based in non-NATO countries (China, Israel, South Korea, and Japan). For example, we identified eight leases of high-security space from Chinese companies entered into prior to November 1, 2014. These leases are for space occupied by the Drug Enforcement Administration (DEA), Secret Service, Social Security Administration, and GAO. GSA indicated that SAM did not contain information on lessors that listed physical or mailing addresses in China. The leases of space that we identified as being in foreign-owned buildings are occupied by agencies such as six FBI field offices, three DEA field offices, and two Social Security Administration offices. Because the tenants include intelligence and law enforcement agencies, this high-security space is used, among other things, for classified operations and storage of weapons, law enforcement evidence, and sensitive data. Examples of high-security leased space are shown in figure 2. Of the 25 leases, we found that the amount of space leased ranged from about 5,600 square feet to more than 800,000 square feet and that annual rent ranged from about $174,000 to about $24 million in 2016. We also found that 10 are high-value leases—those with a net annual rent above a threshold for which GSA is required to submit a prospectus, or proposal, to the House and Senate authorizing committees for their review and approval. The threshold for submitting a prospectus was $2.85 million for fiscal year 2014, the most recent threshold established. The total amount of space leased was about 3.3 million square feet at an annual cost of about $97 million. 
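The high-value screen described above is a simple threshold test. A minimal sketch follows, using the fiscal year 2014 prospectus threshold cited in the report; the per-lease rent field name is a hypothetical illustration.

```python
# Fiscal year 2014 prospectus threshold for net annual rent, per the report.
PROSPECTUS_THRESHOLD = 2_850_000

def high_value_leases(leases):
    """Return the leases whose net annual rent exceeds the prospectus
    threshold, requiring GSA to submit a proposal to the House and Senate
    authorizing committees for review and approval."""
    return [lease for lease in leases
            if lease["net_annual_rent"] > PROSPECTUS_THRESHOLD]
```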
Nine of the 14 tenant agencies that we contacted indicated they were not aware that the space they were occupying was in buildings that we identified as being owned by foreign companies. For example, the Executive Office for United States Attorneys indicated that it has no records showing that GSA notified the office that a building it was occupying was foreign owned. The other five agencies that knew about occupying foreign-owned space had taken actions to mitigate the risk or were not concerned. Besides GSA, other agencies use their own statutory authority to lease space from foreign companies. For example, the State Department is leasing space for the U.S. ambassador to the United Nations in the Waldorf-Astoria Hotel in New York City, which was acquired by a Chinese company in 2014. We also found that the U.S. Mint, using its own authority, is leasing its headquarters building in Washington, D.C., from a Japanese parent company that is the ultimate owner. Several federal officials who assess foreign investments in the United States and selected real estate company representatives we spoke to told us that leasing space in foreign-owned buildings could present security risks such as espionage, unauthorized cyber and physical access to the facilities, and sabotage. For example, a DHS foreign investment official said that potential threat actors could coerce owners into collecting intelligence about the personnel and activities of the facilities when maintaining the property. The official said this situation could occur by direct observation or surreptitious placement of devices in sensitive spaces or on the telecommunications infrastructure of the facility. In addition, a DHS cybersecurity official said that advanced persistent cyber threats (adversaries possessing sophisticated levels of expertise and significant resources to pursue their objectives) tend to come from foreign sources. 
In addition, a representative from a real estate company said that foreign ownership could pose a cyber risk in buildings with data systems and sensitive information. Based on our analysis, interviews, and other information, we identified low, moderate, and high security risk levels associated with leasing space from foreign owners. At the lower level of risk, foreign entities that invested through real estate investment trusts (REIT) and other passive investments may be removed from accessing or managing the facilities. At the next level, foreign entities that have directly purchased the buildings may have access and operational control in the event that a lease or mitigation measure (discussed later in this report) does not exist to restrict such access. At the highest level, foreign entities that constructed the buildings could provide access to their structure and design, increasing the risk of nefarious action as demonstrated by the construction of the U.S. Embassy in Moscow and design-construction leased space, described earlier. Conversely, representatives from two real estate companies whom we interviewed said that it is not a security risk for the government to lease space in foreign-owned buildings or that the risks could be addressed. For example, one of the representatives said that access at high-security facilities is strictly controlled, including access by the owners, and that passive investors in properties do not have access to the buildings. A representative from a third real estate company said that it is not a security risk for the government to lease space in foreign-owned buildings because that company’s properties are managed by U.S. companies with no involvement from the passive investors. He said that passive investors have on rare occasions toured the properties, but they were subject to the agencies’ security clearance procedures. 
A representative from a fourth company said that foreign ownership is irrelevant when capital funds come from many investors that do not control the buildings. A representative from a fifth company said that people such as property managers, asset managers, and building engineers have more direct access to building systems and data than the owners and that they are subject to background checks and must be escorted in high-security buildings. He added that there could be cheaper ways to conduct nefarious action than by buying a building. Regarding the construction of new buildings, one company representative noted that construction contractors are vetted. A representative from a real estate association said that the federal government leasing space in foreign-owned buildings is not “in and of itself” a security risk. He said that foreign owners of U.S. real estate—including in some cases foreign governments—often will have meaningful, but noncontrolling, interests in that property, which may give the foreign owner a sizable financial interest in the property’s leasing income and appreciation, but no involvement in the actual management or operation of the property. However, he added that the security risk may increase if the federal government is leasing from ownership entities controlled by companies from countries that are not allies of the United States. Some agencies that are occupying buildings that we identified as owned by companies based in non-NATO countries raised the following concerns: The Secret Service indicated that its counterintelligence branch determined that foreign ownership of a building it occupies could raise counterintelligence and security concerns.
According to the Secret Service, the protection of its information, technology, personnel, and space could be in jeopardy if the space were compromised through unannounced inspections, emergency repairs to the building or any component within it, the use of foreign nationals to provide any type of service, or unescorted access throughout the space by the facility owner or representatives. Furthermore, the Secret Service indicated that the integrity and protection against potential compromise of the agency’s protection and intelligence information, criminal investigations, and personally identifiable information would require implementing additional countermeasures to mitigate any threats and protect the agency’s operations as a result of occupying space in a building that we identified as being foreign owned. DEA indicated that foreign ownership raises security risks that should be mitigated. DEA’s primary concern is the possible unauthorized access to its secure areas and information. According to the agency, two important mitigation methods are ensuring that independent locksmiths are used to secure the office and that the security vendor is not affiliated with the owner. DEA also indicated that it would be useful for GSA to inform the agency about changes in ownership because this information would help its security assessment. DOJ, which has three agencies occupying a building that we identified as being foreign owned, indicated that it would conduct additional reviews before occupying space leased from a landlord under the ownership, control, or influence of a country that is not an ally of the United States or with which the United States has no diplomatic relations. DHS’s National Protection and Programs Directorate indicated that it has contacted GSA to identify any steps that it takes to assess the potential risk posed by a foreign-owned property and that in the future, DHS will use this information to assess space that GSA proposes that it occupy.
By contrast, the Administrative Office of the United States Courts, another tenant in foreign-owned space, indicated that knowing that the building is foreign owned would have been immaterial to occupying the space because, to its knowledge, GSA does not consider whether a company is foreign when reviewing potential offers and awarding a lease. Similarly, four other tenant agencies occupying space in buildings that we identified as being owned by companies based in non-NATO countries—FBI, IRS, the Social Security Administration, and the Treasury Inspector General for Tax Administration—indicated that foreign ownership of those buildings did not raise security concerns. Tenant agencies such as the Administrative Office of the United States Courts, the Social Security Administration, and the Department of Veterans Affairs also emphasized that GSA selects the leased space, not the tenants. GSA leases of foreign-owned space generally restrict the owners from physically accessing the space except to maintain or inspect the facilities. According to GSA, every standard lease contains the same general restrictions on owner access without regard to the owner’s nationality. Specifically, the restriction states that “the Lessor may at reasonable times enter the premises with the approval of the authorized Government representative in charge.” Of the 11 buildings owned by companies based in non-NATO countries, the tenant agencies or the owners told us that the owners or their representatives had entered 8 of them, for example, for inspection purposes. If CFIUS has national security concerns about a covered transaction and does not believe those concerns can reasonably be addressed through the U.S. Government lease or other existing authorities, it may propose that the acquiring company enter into a mitigation agreement or impose conditions.
Another potential risk to the government regarding foreign-owned leased space is the possibility of entering into leases with hidden beneficial owners of buildings that are using the investment to launder money. A 2006 FinCEN report found that hidden beneficial owners launder money through commonly reported entities, such as property management, real estate investment, realty, and real estate development companies. Furthermore, we have reported that money laundering and terrorist financing are crimes that can destabilize national economies and threaten global security. GSA checks whether potential lessors have sufficient funds to meet their lease obligations, but is not required to collect beneficial ownership information and therefore does not know the beneficial owners of the buildings it leases. However, federal internal control standards indicate that management should identify, analyze, and respond to risks related to achieving the defined objectives. When leasing space, GSA checks the Excluded Parties List System, which is a list of companies and individuals that are excluded from receiving federal contracts, and Treasury’s Specially Designated Nationals and Blocked Persons List, which is a list of individuals and companies whose assets are blocked and with whom U.S. persons are generally prohibited from dealing. In leasing from foreign companies, GSA does not consider whether the lessors are “politically exposed persons,” which the Financial Action Task Force (FATF) defines as individuals who are or have been entrusted with a prominent public function. According to FATF, many politically exposed persons hold positions that “can be abused for the purpose of laundering illicit funds or other predicate offenses such as corruption or bribery.” In 2010, the Senate examined how politically powerful foreign officials, their relatives, and close associates—politically exposed persons—have used the services of U.S.
professionals and financial institutions to bring large amounts of suspect funds into the United States to advance their interests. Furthermore, in July 2016, DOJ announced the filing of civil forfeiture complaints seeking the forfeiture and recovery of more than $1 billion in assets, including real estate in New York and Los Angeles, associated with an international conspiracy to launder funds misappropriated from a Malaysian sovereign wealth fund. We found that commercially available screening software can be used to identify heightened-risk individuals and organizations and to mitigate risks associated with illicit funds, money laundering, fraud, organized crime, sanctions program violations, and terrorist financing, among other risks. GSA’s lease of space for the FBI field office in Seattle may be an example of GSA leasing high-security space from a beneficial owner who is a politically exposed person. Our review found that the FBI field office in Seattle is ultimately owned by the Taib family of Malaysia through a series of domestic and foreign companies. Advocacy groups such as Global Witness allege that the Taib family has profited from corrupt practices in Malaysia. The lease was executed by Wallyson’s, a Washington state corporation, which is owned by Sakti International Corporation, a California corporation. According to a Dun & Bradstreet report in GSA’s leasing file, Sakti International Corporation is 100 percent financed by the Taib family of Malaysia. Furthermore, according to a 2008 document in the GSA leasing file, Sakto International, located in Canada, is the parent company of Sakti International Corporation. The lease was signed by Rahman Taib—the president, secretary, and chief financial officer of Wallyson’s—who is also the son of the former chief minister of Sarawak, Malaysia. We found no evidence that the family has been indicted or convicted of wrongdoing that would disqualify them from leasing to the government.
However, Global Witness representatives told us that the government runs financial and non-financial risks as well as a reputational risk if it leases from individuals who have been accused of wrongdoing, regardless of whether they have been indicted or convicted. GSA and FBI officials said that they are not concerned about the ownership of the FBI field office in Seattle. According to GSA, “as long as the lessor performs according to the contract, additional concerns about ownership would not be raised.” FBI officials told us that the FBI does not have any concerns about either the physical or cyber security of the building or the sources of funding used to finance the building. The officials said that the owners may not enter the building. Our review of the lease for this building indicated that the government will have paid a total of $56 million in rent over the 20-year term ending in 2019. GSA officials said that leasing specialists must review the lists of excluded parties at least twice—after receiving offers and before awards. We asked GSA to provide evidence that it checked these lists with regard to the lease of space for the FBI field office in Seattle. Federal internal control standards indicate that documentation is a necessary part of an effective internal control system. However, GSA could not produce evidence that it had conducted these checks at those times. We did not find Taib family members on the Excluded Parties List or the Specially Designated Nationals and Blocked Persons List. GSA indicated that most lessors establish a separate entity—usually a limited liability corporation (LLC)—for each building. Private LLCs are not subject to the same public disclosure requirements as publicly traded companies. Representatives from a real estate LLC that leases many buildings to GSA told us that its investment capital comes from foreign sources that use financial institutions in the United States. 
Because the real estate LLC is privately owned, we found no publicly available information about its investment sources. In May 2016, FinCEN issued final rules that would, in part, require covered financial institutions to identify and verify the beneficial owners of legal entity customers. According to the rulemaking, covered financial institutions are not presently required to know the identity of beneficial owners, “enabling criminals, kleptocrats, and others looking to hide ill-gotten proceeds to access the financial system anonymously.” Covered financial institutions must comply with the new rules by May 11, 2018. Also in May 2016, Treasury announced that it sent beneficial ownership legislation to Congress for consideration that, among other things, would require companies formed within the United States to file beneficial ownership information with the Treasury Department and face penalties for failure to comply. The legislation would also require the Secretary of the Treasury to define “beneficial owner” for the purposes of implementing the proposed legislation. Separately, members of the House and Senate have independently introduced various pieces of beneficial ownership legislation. According to the sponsors, law enforcement efforts to investigate corporations and LLCs suspected of committing crimes such as money laundering have been impeded by the lack of available beneficial ownership information. In 2002, FinCEN temporarily exempted certain financial institutions, including persons involved in real estate closings and settlements, from the requirement to establish an anti-money laundering program that includes verifying customer identities. The exemption is still in place and, in 2015, advocacy organizations such as Global Financial Integrity, Global Witness, and the FACT Coalition urged FinCEN to remove the exemption.
The organizations said that investors can mask the true ownership of property in the United States when the real estate purchase is made through anonymous companies, allowing millions of dollars to be invested in real estate transactions without detection. Global Financial Integrity, Global Witness, and the FACT Coalition representatives also told us that it is easy in the United States to create untraceable shell companies—which have no operations and can be used for illicit purposes such as laundering money. However, GSA officials said that they rely on due diligence processes conducted by real estate companies and banks to check the legitimacy of the funds that are used to finance the buildings that GSA leases. Representatives from a real estate company that uses foreign investments to finance buildings that are leased to GSA told us that they rely on banks and an independent global firm that provides fiduciary services to ensure that the sources of funds comply with applicable regulations. However, banking and real estate associations expressed different views on which parties are primarily responsible for checking the sources of funds used for commercial real estate. A representative from the American Bankers Association said that while it might seem reasonable for real estate companies to rely on banks (and for GSA to rely on real estate companies) to check the legitimacy of the funds that are used to finance real estate projects, banks do not always have sufficient information about the transaction. He said that when these transactions are put together, the lender may not have direct contact with the purchaser or the seller and if the purchaser or a seller is a corporation, the bank knows the corporation but not necessarily the details about the corporation, its structure or its management. 
The representative added that because the real estate company is dealing directly with the corporation as its client and has access to the individuals who can provide that information, the real estate company has a direct relationship and is in the best position to obtain any detailed information about the purchaser or seller. However, a representative from the Real Estate Roundtable, an association of real estate firms and associations, said that the many participants in the commercial real estate transactional process, such as mortgage bankers, brokers, and title agents, are unlikely to have any significant and important information bearing on the possibility of money laundering activities given their function in commercial real estate transactions. He said that these participants are generally small businesses unequipped to deal with significant training on regulations, policing, audit, and recordkeeping responsibilities. The Real Estate Roundtable representative also said that anonymity and liquidity—two characteristics important to money launderers—typically do not exist in real estate transactions because real estate transactions generally involve illiquid and visible assets. In addition, a representative from the National Association of Real Estate Investment Trusts told us there is a low risk of illegal foreign money being used to finance publicly traded real estate investment trusts (REITs). He noted that because a publicly traded REIT is financed and operated in effectively the same manner as any other publicly traded company, it is very unlikely it would be used as a mechanism to launder money. Furthermore, the Real Estate Roundtable representative said that because real estate is not a highly liquid asset and real estate transactions generally create a detailed “paper trail” of debt and equity investors, commercial buildings are not ideally suited to be money laundering vehicles.
However, in its 2006 report on money laundering in the commercial real estate market, FinCEN stated that although real estate historically has been a relatively illiquid asset, money launderers may use real estate both as an investment and as a vehicle to store laundered funds. GSA’s leasing policies and procedures do not distinguish between leasing from domestic and foreign companies. When leasing space, GSA is required, among other things, to determine whether the prospective lessor is a responsible party. As discussed earlier, GSA officials said that this process includes, among other things, checking whether the entity has the financial means to fulfill the contract and assessing whether the building will be operated properly. However, under GSA’s Acquisition Manual, foreign ownership is not one of the factors that GSA must consider when deciding whether to contract for a lease. Offerors are required to disclose certain ownership information that may indicate whether they are foreign owned. The Homeland Security Act of 2002 authorizes GSA to protect federal facilities except those functions delegated to the Department of Homeland Security. Tenant agencies and the Federal Protective Service (FPS) also have responsibility for protecting federal facilities. According to the Interagency Security Committee (ISC) standard on protecting federal facilities, tenant agencies and FPS are to conduct risk assessments for facilities with security levels III, IV, and V at least every 3 years. The standard also states that tenant agencies are responsible for making final facility security level determinations, must devise a risk management strategy, and, if possible, fund appropriate security countermeasures to mitigate the identified risk. As discussed earlier, GSA’s information on foreign ownership of high-security space was not reliable, and, as a result, tenant agencies lack information on such foreign ownership even though it can pose risks involving physical and cyber security and foreign financing.
As discussed below, GSA’s existing procedures for obtaining information provide the agencies with some information on foreign ownership, but this information is incomplete. GSA officials said that they do not have the ability or authority to check foreign ownership beyond certain sources currently available to them. In addition, although GSA checks whether potential lessors have sufficient funds to meet their lease obligations, it does not check the lessors’ source of funds. As discussed below, various steps in the leasing process may disclose whether an offeror’s company is foreign owned as well as whether the company is owned by an immediate or highest level owner. However, these sources provide incomplete information on foreign ownership and foreign investment in space leased by GSA. In addition, GSA officials said that they do not validate the information on foreign ownership that contractors disclose in the System for Award Management (SAM). Lessors are required to self-disclose whether they are foreign owned. One way that GSA can identify a foreign company during the leasing process is when the lessor completes the representations and certifications form, which is part of the lease agreement, as required by the Federal Acquisition Regulation (FAR). When completing this form, the lessor is required to certify with respect to whether a taxpayer identification number is needed if it is a (1) “nonresident alien, foreign corporation, or foreign partnership that does not have income effectively connected with the conduct of a trade or business in the United States and does not have an office or place of business or a fiscal paying agent in the United States” or (2) “an agency or instrumentality of a foreign government.” GSA is required to check whether prospective lessors are barred from conducting business with the government. GSA’s Leasing Desk Guide requires GSA to ascertain whether the offeror has been disqualified or excluded from participating in federal contracts.
As previously discussed, GSA indicated that it checks the Excluded Parties List System and Treasury’s Specially Designated Nationals and Blocked Persons List. We did not find any of the owners based in foreign countries listed in table 1 on the Excluded Parties List or the Specially Designated Nationals and Blocked Persons List. Companies are required to report information about their identities using various business codes. Under the FAR, an offeror must register with Dun & Bradstreet’s Data Universal Numbering System (DUNS), which assigns a unique identifier to each business, and include the DUNS number when registering in SAM. Companies are assigned a Commercial and Government Entity (CAGE) code—an identification number assigned by the Defense Logistics Agency that is used within the federal government—to participate in SAM. Each entity (business, individual, or government agency) must register with SAM to conduct business with the federal government. Starting on November 1, 2014, the FAR began requiring offerors to provide additional ownership information through SAM, including, among other things, the “immediate” and “highest” level ownership of the offeror and the CAGE or North Atlantic Treaty Organization CAGE (NCAGE) codes for these entities.
“Immediate owner” means an “entity, other than the offeror, that has direct control of the offeror.” This definition includes “ownership or interlocking management, identity of interests among family members, shared facilities and equipment, and the common use of employees.” A highest level owner means “the entity that owns or controls an immediate owner of the offeror, or that owns or controls one or more entities that control an immediate owner of the offeror.” Of the 8 lessors based in non-NATO countries that we identified from leases entered into prior to November 1, 2014, and thus not required to include immediate and highest level ownership information, 7 did not self-identify as foreign owners on their representations and certifications form. Our review of GSA’s lease inventory found that the business entity names are frequently building names or street addresses that do not reflect useful ownership information. FBI officials told us that GSA could contact the FBI if it had concerns about a particular foreign company, but declined to state what types of information the FBI could provide to GSA. CFIUS has a limited role in identifying risks of GSA leasing from foreign companies. CFIUS officials said that, consistent with the scope of FINSA, CFIUS could not review GSA leasing from a foreign company unless a foreign person, as defined in CFIUS’s regulations, acquired control of a U.S. business that owned the building in which GSA was leasing space. Additionally, CFIUS could not review a foreign company’s construction of a building in the United States if the company did not acquire control of a U.S. business. During 2014 (the most recent available information), CFIUS conducted 147 reviews of covered transactions.
Because GSA is not required to identify beneficial ownership information for the space it leases and because GSA is not informing tenant agencies when the space they are occupying is leased from foreign owners, tenants may not be aware that they are occupying space that is foreign owned and may not be addressing any security risks associated with foreign ownership. Because GSA is not identifying the beneficial owners of the properties it leases, it cannot check whether those owners raise any issues that may represent security risks to tenant agencies. GSA’s incomplete information and lack of policies and procedures regarding foreign ownership of high-security leased space may undermine the security of the tenants’ facilities. When GSA does not know the beneficial owners of the high-security properties that it is leasing, it lacks information that should be shared with its tenants for their facility risk assessments. Moreover, when tenant agencies lack information about the beneficial owners of their high-security facilities, they may not correctly evaluate the security risks and, consequently, not take the most appropriate steps to secure their buildings, leaving the facilities vulnerable, for example, to cyber intrusions. Our review found that GSA is leasing a small portion of its high-security leased space from foreign owners. However, because ownership information was not available regarding about one-third of the buildings with high-security leased space, GSA is likely leasing from more foreign companies than is readily identifiable. Because CFIUS’s authority is limited to reviewing foreign acquisitions that could result in control of a U.S. business, which rarely involves GSA-leased space, CFIUS has a limited role in identifying and mitigating risks of GSA leasing from foreign companies. As a result, GSA cannot rely on CFIUS to identify and mitigate these risks. 
As the leasing agent, GSA is in the best position to identify the beneficial owners of the high-security space that it leases and communicate the relevant information to its federal tenants so that they may adequately assess and mitigate any potential security risks associated with them. We recommend that the Administrator of the General Services Administration determine whether the beneficial owner of high-security space that GSA leases is a foreign entity and, if so, share that information with the tenant agencies so they can adequately assess and mitigate any security risks. We provided a draft of this report for review and comment to GSA; the departments of Defense (DOD), Energy (DOE), Homeland Security (DHS), Justice (DOJ), State, the Treasury, and Veterans Affairs (VA); the Administrative Office of the United States Courts; the Federal Deposit Insurance Corporation (FDIC); the Office of the Director of National Intelligence (ODNI); the Securities and Exchange Commission (SEC); the Social Security Administration; and the agencies that determined that the information about the foreign-owned buildings they occupy is for official use only and is not included in this report. GSA provided written comments, reprinted in appendix I, agreeing with the report’s recommendation. DOD provided a letter, reprinted in appendix II, indicating that it had no comments on the report. ODNI provided a letter, reprinted in appendix III, indicating that ODNI and the Intelligence Community concur with the recommendation. The Social Security Administration provided a letter, reprinted in appendix IV, indicating that the report accurately reflects its activities regarding this review. DHS, DOJ, the Department of the Treasury, and the Social Security Administration provided technical comments, which we incorporated as appropriate. The Administrative Office of the United States Courts, DOE, FDIC, SEC, the State Department, and VA had no comments.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Administrator of the General Services Administration; the Secretaries of Defense, Energy, Homeland Security, State, the Treasury, and Veterans Affairs; the Attorney General; the Director of the Administrative Office of the United States Courts; the Director of National Intelligence; the Chairman of the Federal Deposit Insurance Corporation; the Chair of the Securities and Exchange Commission; the Commissioner of the Social Security Administration; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. In addition to the individual named above, Keith Cunningham, Assistant Director; Bob Homan, Analyst-in-Charge; Lisa Shibata; Camilo Flores; Tonita Gillich; and Michelle Weathers made key contributions to this report.

GAO has previously reported that federal facilities are vulnerable to threats from foreign sources that may target their information systems and affect the physical security of the occupants. GAO was asked to examine GSA's lease of high-security space from foreign owners. This report addresses (1) what is known about foreign ownership of high-security space leased by GSA, (2) potential risks posed by such foreign ownership, and (3) policies and procedures regarding GSA's leasing of space from foreign-owned entities.
GAO reviewed GSA's leasing documents; identified and checked ownership information regarding high-security leased space to the extent possible using data, as of March 2016, from a firm that specializes in analyzing the commercial real estate market; interviewed GSA and federal foreign investment officials, tenant agencies that were occupying space owned by foreign entities, and five real estate companies that lease space to GSA or provide related services; and visited three foreign-owned high-security leased facilities selected to represent a variety of owners and tenants. GAO reviewed available information on the ownership of General Services Administration (GSA) leased space that requires higher levels of security protection based on factors such as mission criticality and facility size (high-security space) as of March 2016 and found that GSA is leasing high-security space from foreign owners in 20 buildings. The 26 tenant agencies occupy about 3.3 million square feet at an annual cost of about $97 million and use the space, in some cases, for classified operations and to store law enforcement evidence and sensitive data. The foreign-owned leased space included six Federal Bureau of Investigation field offices and three Drug Enforcement Administration field offices. GAO determined that the high-security space is owned by companies based in countries such as Canada, China, Israel, Japan, and South Korea. GAO was unable to identify ownership information for about one-third of GSA's 1,406 high-security leases as of March 2016 because ownership information was not readily available for all buildings. Federal officials who assess foreign investments in the United States and some tenant agencies occupying high-security leased space told GAO that leasing space in foreign-owned buildings could present security risks such as espionage and unauthorized cyber and physical access. 
However, 9 of the 14 tenant agencies GAO contacted were unaware that the space they occupy is in a building that GAO identified as foreign owned. The other five agencies that knew about occupying foreign-owned space had taken actions to mitigate the risk or were not concerned. Another risk is possibly entering into leases with hidden beneficial owners—the persons who ultimately own and control a building. According to the Treasury Department's Financial Crimes Enforcement Network, the risks of contracting with hidden beneficial owners include money laundering. GSA is not required to collect beneficial ownership information and therefore does not know the beneficial owners of the buildings it leases. Federal agencies are required to assess and address the risks to their high-security facilities, but GSA does not inform tenants when leasing space from foreign owners. When leasing space, GSA is required, among other things, to determine whether the prospective lessor is a responsible party, but foreign ownership is not one of the factors that it must consider. As a result, tenants may be unaware that they are occupying foreign-owned space and not know whether they need to address any security risks associated with such foreign ownership. GAO recommends that GSA determine whether the beneficial owner of high-security leased space is a foreign entity and, if so, share that information with the tenant agencies for any needed security mitigation. GSA agreed with the recommendation.
Our work was performed primarily using survey instruments to gather data on property, plant, and equipment (PP&E) policies at 14 selected federal agencies and 12 private sector companies. Appendix II provides a list of all survey participants. We also held discussions with certain representatives of the participating federal agencies and private sector companies in developing the survey and in gathering follow-up information based upon the survey responses. We did not verify the accuracy of the data provided to us by the survey participants. We conducted our work from May 2001 through February 2002 in accordance with U.S. generally accepted government auditing standards. We requested comments on a draft of this report from the Department of the Treasury, OMB, all 14 federal agencies that participated in the survey, as well as the 12 private sector council member survey participants. Further details on our scope and methodology are included in appendix I. The Secretary of the Treasury, in coordination with the Director of OMB, is required to submit annually to the President and the Congress audited consolidated financial statements of the U.S. government beginning with those for fiscal year 1997. We are required to audit those statements. The principal financial statements required for federal agencies are the Balance Sheet, the Statement of Net Cost, the Statement of Changes in Net Position, the Statement of Budgetary Resources, and the Statement of Financing. These statements are to be prepared in accordance with U.S. generally accepted accounting principles. The balance sheet for the federal government presents the total balances of assets, liabilities, and net position as of a specific point in time. The government’s general PP&E, reported at almost $307 billion, net of accumulated depreciation as of September 30, 2001, represents approximately one-third of the assets on its balance sheet.
Federal accounting standards, which agency CFOs use in preparing financial statements, are promulgated by the Federal Accounting Standards Advisory Board (FASAB). FASAB develops accounting standards after considering the financial and budgetary information needs of the Congress, executive agencies, other users of federal financial information, and the public. FASAB forwards the standards to the three principals—the Comptroller General, the Secretary of the Treasury, and the Director of OMB—for a review period, after which the standards are considered final and are published on FASAB’s Web site and in print. The American Institute of Certified Public Accountants recognizes the federal accounting standards promulgated by FASAB as generally accepted accounting principles (GAAP) for the federal government. Currently, there are 22 Statements of Federal Financial Accounting Standards (SFFAS) and three Statements of Federal Financial Accounting Concepts (SFFAC). The concepts and standards are the basis for OMB’s guidance to agencies on the form and content of their financial statements and the government’s consolidated financial statements. FASAB significantly relied upon SFFAC No. 1, Objectives of Federal Financial Reporting, in drafting accounting standards for PP&E. The two principal reporting objectives relevant to PP&E are operating performance and stewardship. In developing PP&E standards to meet the operating performance objective, FASAB established the goal of measuring the cost associated with using PP&E and including that cost in entity operating results. In seeking to fulfill the stewardship objective, FASAB developed the PP&E accounting standards to result in reporting information on (1) asset condition, (2) changes in the amount and service potential of PP&E, (3) the cost of PP&E where applicable, and (4) spending for acquisition of PP&E versus noncapital spending.
Although FASAB established the reporting objectives framework in developing the PP&E accounting standards, it concluded that capitalization thresholds should be established by the federal entities themselves, based on their diversity in size and uses of PP&E. FASAB’s requirements for establishing appropriate capitalization thresholds are that they be based on consideration of the entities’ financial and operational conditions, consistently applied, and disclosed in the financial reports. Before 1991, accounting principles, standards, and related requirements for executive agencies were published in appendix I of Title 2, “Accounting,” of the GAO Policy and Procedures Manual for Guidance of Federal Agencies, in accordance with 31 U.S.C. 3511. The capitalization threshold for federal agencies included in Title 2 was $5,000. In addition, under Federal Acquisition Regulations, government contractors are required to capitalize all assets costing $5,000 or more. Capitalization thresholds are tied to materiality as well, in that they generally are established at a level that would not omit a significant amount of assets from the balance sheet, which could materially misstate the financial statements of an entity or its components. No authoritative private sector standard specifically addresses capitalization threshold levels. However, the underlying GAAP state that all normal expenditures of purchasing or constructing an asset and readying it for use are capitalized, thereby achieving the matching principle by distributing the costs of such assets to the periods benefited through depreciation. Capitalization thresholds are set at levels that would not approach materiality in any foreseeable circumstances. SFFAS No. 6 requires that depreciation expense be recognized on all general PP&E, except land and land rights of unlimited duration.
Depreciation is the systematic and rational allocation of the costs of general PP&E to the operating periods benefiting from the asset, also referred to as the estimated useful life. FASAB again did not prescribe specific classifications of estimated useful lives. Instead, it requires that the useful life consider economic, environmental, and technological factors such as physical wear and tear and obsolescence. In the private sector, GAAP state that depreciation method and rate depend upon such factors as time, usage, maintenance policies, and asset obsolescence, and recognize certain prescribed methods for allocating the cost to the periods benefited. PP&E consists of tangible assets, including land, that have estimated useful lives of 2 years or more, are not intended for sale in the ordinary course of operation, and have been acquired or constructed with the intention of being used or being available for use by the entity. SFFAS No. 6, Accounting for Property, Plant, and Equipment, identifies four categories of PP&E: (1) general PP&E, (2) national defense PP&E, (3) heritage assets, and (4) stewardship land. General PP&E is used to provide general government services or goods and is reported on the balance sheet for federal financial reporting. National defense PP&E, heritage assets, and stewardship land are collectively referred to as stewardship PP&E and are reported in Supplementary Stewardship Information for federal financial reporting and are not included on the balance sheet or any other principal statement. FASAB has approved issuing a standard that would eliminate the category of national defense PP&E, and all items previously considered national defense PP&E would be classified as general PP&E. The focus of this report is on general PP&E, reported on the balance sheet of the U.S. government, which under current federal accounting standards does not include national defense PP&E. 
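The matching achieved through depreciation, as defined at the start of this section, can be illustrated with a short sketch. The figures are hypothetical, and straight-line allocation is only one of several recognized methods:

```python
def straight_line_depreciation(cost: float, salvage: float,
                               useful_life_years: int) -> list:
    """Allocate an asset's depreciable cost evenly over its estimated useful life."""
    annual_expense = (cost - salvage) / useful_life_years
    return [annual_expense] * useful_life_years

# Hypothetical example: a $20,000 piece of equipment with no salvage value
# and a 5-year estimated useful life yields $4,000 of expense per year, so
# each operating period bears a share of the asset's cost.
schedule = straight_line_depreciation(20_000, 0, 5)
```

Expensing the full $20,000 in the year of purchase, by contrast, would charge the entire cost to one period while later periods benefit from the asset at no reported cost.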
General PP&E consists of items that (1) could be used for alternative purposes but are used by the federal entity to produce goods or services or to support the mission of the entity, (2) are used in business-type activities, or (3) are used by entities in activities whose costs can be compared to other entities. SFFAS No. 6 requires that all general PP&E be recorded at cost, which shall include all costs incurred to bring the PP&E to a form and location suitable for its intended use. General PP&E includes land acquired for or in connection with other general PP&E and heritage assets, whose predominant use is general government operations. General PP&E is often classified into two main categories, personal and real property. Personal property includes vehicles, machinery, furniture, equipment, and software. Real property is land, buildings, and generally anything built or constructed on land, growing on land, or attached to the land. Capitalization threshold levels at federal agencies have risen significantly over the past 5 years. Over 60 percent of the agencies surveyed have at least doubled their capitalization thresholds in the past 5 years. Further, federal capitalization threshold levels are significantly higher than those reported by the 12 private sector entities we surveyed. The maximum federal capitalization threshold levels reported for personal and real property were much higher than those reported by the private sector companies. Inappropriate or excessive capitalization thresholds have a significant impact on financial reporting and related oversight issues and may not comply with SFFAS No. 6 requirements. Nine out of the 14 agencies surveyed reported that they had increased their capitalization thresholds in the past 5 years by 100 percent or more, for at least one category of property (excluding software). Figure 1 displays the increases by agency. 
Six agencies increased the capitalization threshold by 400 percent or more: two raised the threshold from $5,000 to $25,000, two from $5,000 to $100,000, one from $25,000 to $200,000, and one from $5,000 to $250,000. Appendix III provides detailed lists of the current capitalization level for both personal and real property by federal agency. Reasons given by the surveyed federal agencies for changing the capitalization thresholds included materiality, implementation of federal accounting standards, management decision, and external auditor recommendation. Twelve out of 14 federal agencies surveyed responded that they had performed some type of formal analyses or studies to develop or validate the capitalization threshold level. In some instances, the studies concluded that the threshold level was too low, prompting agency management to increase the capitalization threshold. Many agencies’ approaches to the capitalization threshold analyses involved applying varying threshold levels to PP&E balances to identify a capitalization level that resulted in a certain desired percentage of PP&E being captured on the balance sheet in relation to total PP&E. Although our survey asked for a brief description of the methodology used in the analyses, we did not request copies of the analyses from the federal agencies or assess the methodology or conclusions reached. We did note, however, that 5 of the 12 agencies solicited outside assistance in performing the analyses, and of those 5, 2 involved their respective offices of the inspector general. Although DOD was not included in this review, the department holds a significant portion of federal PP&E. DOD’s reported general PP&E holdings for fiscal year 2001 were $113.8 billion, net of accumulated depreciation, representing approximately 37 percent of the federal government’s PP&E reported on the U.S. government’s consolidated balance sheet.
DOD’s capitalization threshold has risen from $5,000 in 1994, to $50,000 in 1995, and to $100,000 in 1996, which remains the current level. DOD had contractors perform a study to validate its capitalization thresholds and useful life policies for personal and real property. We reviewed the contractors’ work and agreed that certain limitations they cited in their reports—such as that the databases they analyzed may not have been appropriate, complete, and accurate—could directly affect the assessment of the adequacy of the capitalization threshold and useful life policies. The contractors recommended that DOD undertake similar periodic analyses in future years. Further, federal agency capitalization thresholds varied widely. They ranged from $0 to $250,000, excluding computer software, where the capitalization threshold ranged from $5,000 to $5 million. The lack of consistency in capitalization threshold levels among federal agencies could potentially lead to reporting problems in the U.S. government’s consolidated financial statements and performance measurement comparisons. For example, at some agencies major assets such as motor vehicles may be capitalized while at others they may not be, due to the varying capitalization threshold levels. As a result, the costs of vehicles used by certain agencies could be expensed in 1 fiscal year and not allocated to all the years benefiting from their use. Further, GAAP require that the capitalization threshold, including any changes in the threshold during the reporting period, be disclosed on the financial statements. Treasury has not disclosed the capitalization threshold used in the consolidated financial statements, or the fact that many reporting agencies have different capitalization thresholds.
Despite the sharp increase in the capitalization threshold, almost all of the surveyed agencies responded that they maintained property records for PP&E not capitalized on the balance sheet, for purposes of safeguarding PP&E, supporting agency operations, or fulfilling external reporting requirements. For example, all surveyed agencies indicated that they have policies and procedures in place, such as bar coding and periodic inventories, for safeguarding and maintaining accountability over pilferable and sensitive items. We did not evaluate the adequacy of the design of the agency policies and procedures or the effectiveness of the controls or their implementation. Even though 13 of the 14 federal agencies in our survey reported that they maintain property records for PP&E not capitalized on the balance sheet, most were unable to provide the cumulative value of PP&E recorded in property records but not capitalized on the balance sheet as of the end of fiscal year 2000. Only the National Aeronautics and Space Administration (NASA) and the Federal Aviation Administration (FAA) responded with the cumulative value of PP&E not reported on their balance sheets as of September 30, 2000—approximately $4.9 billion and $1.6 billion, respectively. However, we noted that NASA’s auditors for fiscal year 2001 reported a material weakness related to PP&E, so the amount NASA reported as being expensed may not be reliable. Federal capitalization thresholds are significantly higher than those reported by the 12 private sector entities we surveyed. We found that the agency capitalization thresholds for personal property ranged from $3,000 to $200,000, and in some cases were 40 times higher than the maximum levels reported by the private sector participants. Table 1 compares the ranges of capitalization thresholds noted at the federal agencies surveyed to those of the private sector participants. 
Appendix IV provides the specific personal property responses for all survey participants by category. As shown in table 1, private sector respondents’ threshold levels for personal property ranged from as low as $250 up to $5,000. Under these threshold levels, office equipment costing $20,000 with an estimated useful life of 5 to 7 years would not be capitalized at more than half (9 out of 14) of the federal agencies surveyed, but would be capitalized at all of the private sector company participants. Five of the surveyed federal agencies reported having a separate capitalization threshold level for bulk purchases. A bulk purchase policy generally refers to capitalization guidelines for acquiring significant asset quantities in bulk at one time, where the individual unit price falls below the original threshold. For example, the National Oceanic and Atmospheric Administration’s (NOAA) policy is to capitalize a bulk procurement of $1 million or more for personal property with a unit price from $25,000 to its individual capitalization threshold of $200,000, if the items are identical. The Department of Education has a $500,000 bulk purchase policy, and the Social Security Administration (SSA) has a $10 million bulk purchase policy for computer hardware and software. Personal computers acquired individually would not be capitalized at many surveyed federal agencies under the current capitalization threshold levels, and bulk purchases of personal computers would have to rise to the capitalization threshold level, or higher at some agencies, as noted above, to be capitalized on the balance sheet. The remaining nine federal agencies responded that the capitalization threshold levels apply to both single-item and bulk purchases of PP&E, as did the majority of the private sector companies surveyed. However, the few private sector respondents with bulk purchase policies indicated an emphasis on capitalizing assets and minimizing the impact on net income.
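The bulk purchase policies described above reduce to a simple decision rule: capitalize when either the unit price or, where a bulk policy exists, the aggregate price meets its threshold. The sketch below is illustrative only; the dollar figures are hypothetical, and actual agency policies (such as NOAA's identical-items condition) attach further conditions.

```python
from typing import Optional

def should_capitalize(unit_cost: float, quantity: int,
                      unit_threshold: float,
                      bulk_threshold: Optional[float] = None) -> bool:
    """Capitalize when a single item meets the unit threshold, or when a
    bulk threshold exists and the aggregate purchase price meets it."""
    if unit_cost >= unit_threshold:
        return True
    return bulk_threshold is not None and unit_cost * quantity >= bulk_threshold

# Hypothetical figures: 200 identical PCs at $2,000 each ($400,000 in
# aggregate) fall below a $100,000 unit threshold, so without a bulk policy
# the whole purchase is expensed; a $250,000 bulk threshold captures it.
without_bulk = should_capitalize(2_000, 200, 100_000)        # False
with_bulk = should_capitalize(2_000, 200, 100_000, 250_000)  # True
```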
For example, Pfizer responded that acquisitions of multiple like items would be capitalized if they exceed $10,000 in the aggregate, even though each item is under its $1,000 threshold level. Certain federal agencies in our survey, as well as some private sector companies, reported capitalization thresholds specifically for software, classified as personal property on the balance sheet. The agencies’ capitalization thresholds for software ranged from $5,000 to $5 million, or 20 times higher than the maximum level reported by the private sector participants. The private sector respondents that reported specific capitalization threshold levels for software indicated ranges from $1,000 to $250,000. Figure 2 displays the percentages of surveyed federal agencies and private sector companies at each capitalization threshold level for software. As shown in figure 2, 92 percent of the federal agencies surveyed have threshold levels greater than $10,000 for software, compared to the private sector, with 25 percent of respondents in that category. The majority (75 percent) of the private sector software capitalization threshold levels were $10,000 and below. SFFAS No. 10, Accounting for Internal Use Software, effective for reporting periods after September 30, 2000, was cited by several federal agencies as the basis for establishing a separate threshold just for software or increasing their capitalization threshold levels for software. SFFAS No. 10 requires the capitalization of the full cost (direct and indirect) of internal use software whether it is commercial-off-the-shelf, contractor developed, or internally developed. A specific capitalization threshold for software, separate from the threshold for all other personal property, may be warranted at many agencies due to the varying and incremental nature of the costs that go into software development, such as salaries.
However, the threshold level for software also varies quite significantly among the federal agencies surveyed, which could result in the consolidation and comparison problems discussed previously. Appendix V provides the specific software responses for all survey participants. Capitalization thresholds for real property ranged from zero, or no threshold, indicating that all such assets are capitalized on the balance sheet, to $250,000, which is 50 times the highest level reported by the private sector participants. Table 2 displays the capitalization threshold ranges for real property at the federal agencies surveyed compared to those in the private sector. Appendix VI provides the specific real property responses for all survey participants by category. Private sector respondents’ threshold levels for real property ranged from $0 to $5,000. Under these threshold levels, a building costing $95,000 with an estimated useful life of 30 years would not be capitalized at some federal agencies surveyed, but would be capitalized at all of the private sector company participants. For example, Gillette, a large corporation with over $10 billion in assets, has a uniform threshold level of $2,500 for both real and personal property, with the exception of software. As shown in tables 1 and 2, the threshold levels in the private sector for most personal property and real property are relatively low, and more consistent with the $5,000 capitalization threshold level previously established for federal agencies in Title 2 of the GAO Policy and Procedures Manual for Guidance of Federal Agencies, in accordance with 31 U.S.C. 3511. In addition, under Federal Acquisition Regulations, government contractors are required to capitalize all assets costing $5,000 or more. Our survey work was not designed to conclude on the reasonableness of the capitalization threshold levels being applied at the federal agencies or the private sector companies. 
However, the widely varying threshold levels among the federal agencies, the sharp increases in recent years, and the differences from private sector companies of considerable size in terms of reported PP&E and total assets raise some concerns. Inappropriate or excessive capitalization thresholds have a significant impact on financial reporting and related oversight issues and may not comply with SFFAS No. 6 requirements to capitalize all items that meet certain characteristics, such as a useful life of 2 years or more. FASAB believed that not specifying a threshold level, and allowing agencies broad latitude in establishing capitalization thresholds suited to their respective financial and operational conditions, would lead to a more cost-effective application of the accounting standard. However, objectives outlined by FASAB in the SFFAC No. 1, such as (1) stewardship responsibility, (2) capturing the full cost of operations, and (3) reliable financial reporting, may not be met as a result of the wide range and significant increase in threshold levels that we identified in our survey. Excessively high capitalization thresholds reduce the amount of federal assets that are reported on the balance sheet, distorting financial reporting by potentially jeopardizing the matching of costs to the appropriate period of asset utilization. For example, in February 2000, we reported that an inappropriate capitalization threshold contributed to a material understatement in PP&E of approximately $1 billion, representing about 77 percent of the IRS’s total PP&E balances as of September 30, 1999. IRS had been following the Department of the Treasury’s standard $50,000 capitalization criterion, and now capitalizes most property and equipment regardless of the dollar amount, based upon the capitalization issues raised as a result of our financial audit of IRS. 
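The effect of a threshold level on what reaches the balance sheet can be sketched directly. The acquisition costs below are hypothetical, and the sketch ignores depreciation; it shows only how a higher threshold shifts asset costs into a single period's expenses:

```python
def split_by_threshold(acquisition_costs, threshold):
    """Partition acquisition costs into amounts capitalized on the balance
    sheet and amounts expensed because they fall below the threshold."""
    capitalized = sum(c for c in acquisition_costs if c >= threshold)
    expensed = sum(c for c in acquisition_costs if c < threshold)
    return capitalized, expensed

# Hypothetical year of acquisitions for one reporting entity
costs = [3_000, 12_000, 45_000, 60_000, 250_000]
at_5k = split_by_threshold(costs, 5_000)      # (367_000, 3_000)
at_100k = split_by_threshold(costs, 100_000)  # (250_000, 120_000)
# Raising the threshold from $5,000 to $100,000 moves $117,000 of assets
# off the balance sheet and into a single period's expenses.
```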
Further, six of the agencies surveyed reported that they expensed a total of almost $2 billion in PP&E for the fiscal year ended September 30, 2000, and therefore did not report this amount on the U.S. government’s consolidated balance sheet as assets. While the $2 billion is a relatively small amount compared to the total PP&E or total assets reported on the consolidated financial statements of the U.S. government, this amount is incomplete, as the remaining eight agencies surveyed could not readily provide the amount expensed for the same period as a result of the PP&E acquisition costs not meeting the capitalization threshold. An assessment of whether the capitalization threshold has a material effect on financial reporting is difficult to make if agencies cannot provide the amount of assets that does not meet the capitalization threshold and is therefore expensed in a given year. Interestingly, one of our surveyed federal entities, USPS, which sets its rates and fees to recover its costs, reported a $3,000 personal property capitalization threshold and $5,000 real property threshold, which is more in line with the surveyed private sector companies’ capitalization threshold levels. For example, Exxon Mobil Corporation, a global company with net PP&E of $90 billion as of December 31, 2000, also has a capitalization threshold of $3,000, excluding software. In fact, in looking at capitalization threshold levels for personal property in the private sector excluding software, all but 2 of the 12 participants had a threshold of $3,000 or less, and the remaining 2 had a $5,000 threshold. As reported in our High-Risk Series, some federal entities do not yet have reliable financial and operational information to measure performance based on the costs of providing goods and services and therefore appear to have little incentive to maintain assets on the balance sheet through lower capitalization thresholds. 
For example, we reported in January 2001 that the Department of Agriculture (USDA) lacked financial accountability over billions of dollars of assets. FAA’s financial management was also designated as high risk because of serious and long-standing accounting and financial management weaknesses, including property system issues. Reliable information on the costs of federal programs and activities, of which PP&E is a major factor, is crucial for effective management of government operations. Useful lives for personal property ranged from 2 to 40 years among the surveyed federal agencies, but include a wide array of assets. Upon comparing the recovery periods for like assets, the range narrows. For example, the useful lives for motor vehicles ranged from 3 to 12 years in the federal agencies surveyed. Useful lives for real property at the surveyed federal agencies ranged from 5 to 100 years, which is a wide range that encompasses numerous and varying types of real property. The federal government’s real property is quite diverse, and includes items such as office buildings, dams, laboratories, courthouses, postal facilities, and embassies. However, when comparing the useful lives for similar buildings or structures across the federal government, the recovery periods are similar. For example, 12 out of the 14 surveyed federal agencies indicated useful life classifications for buildings of 30 to 40 years. The useful life policies within the federal government were generally similar to those found in the private sector. No significant differences were noted between the federal government and the private sector survey respondents in the useful life policies for certain personal property categories such as equipment, furniture and fixtures, motor vehicles, and software. The ranges of useful life classifications for both federal and private sector company participants are shown in table 3 for personal property and in table 4 for real property. 
Appendixes VII and VIII detail the useful life ranges for personal and real property for each survey participant. As noted above, the useful life ranges by category at the surveyed federal agencies are similar to those at the surveyed private sector companies. The high useful life of 40 years for equipment pertains to certain items at the Department of Energy (DOE), such as compressors and metal tanks. If such equipment were excluded, the maximum useful life classification for equipment at the surveyed federal agencies would be 25 years—identical to that in the private sector. The few differences that we identified between federal agency and private sector useful lives are due to the different types of assets owned by the survey participants rather than any systemic differences in the useful life policies. For example, the maximum useful life classification for buildings was reported as 75 years by two surveyed agencies, the Department of the Interior’s Bureau of Reclamation and USPS. At the Bureau of Reclamation, the building useful life range of 30 to 75 years was reported for service facilities, which consist of houses, buildings, garages, and shops owned by the bureau and used in electric, irrigation, municipal and industrial, or multipurpose operations and are not included in the plant accounts of a specific project. At USPS, the 75-year building useful life was reported only for pre-July 1970 monumental (indicating stone or stone ornamentation) buildings. Other than these two specific classifications within the buildings category, the maximum useful life classification for buildings at the surveyed federal agencies would be 50 years, and identical to that in the private sector. Similarly, the 100-year useful life for other structures and facilities at the surveyed federal agencies was for dams and related property at the Bureau of Reclamation. 
The longest useful life reported by the private sector respondents is 40 years for other structures and facilities, but none of the private sector respondents reported an asset similar to a dam. We did not receive survey results from any private sector utility companies for comparison purposes because we were limited to the Private Sector Council (PSC) members that voluntarily participated in our survey. Adequate useful life classifications also serve as a mechanism to achieve fair presentation of an entity’s financial position and results of operations in accordance with GAAP. New additions to PP&E that replace old or obsolete assets generally occur as the useful lives of the older assets are reaching completion, and the typical financial statement impact of the removal of an almost fully depreciated asset or a fully depreciated asset is minimal to none. However, if the useful life assigned to an asset or a class of assets does not reflect its actual service life, then the financial statement impact could be greater. For example, if an asset is assigned a useful life that exceeds its actual service life, the preliminary result would be an overstatement on the agency’s balance sheet and an understatement on its statement of net cost for a period. Conversely, if an asset’s designated useful life were lower than its actual service life, the preliminary result would be an understatement on the agency’s balance sheet and an overstatement on its statement of net cost for a period. Our survey results identified widely varying capitalization threshold levels, sharp increases in recent years, and significant differences from private sector companies of comparable size. Because capitalization thresholds may have a significant effect on the consolidated financial statements of the U.S. government, this survey was designed as the first step in providing baseline information to analyze these significant PP&E policies and assess their impact on the financial reports of the U.S. government.
In addition, agency management and auditors also have continuing responsibilities to ensure that established capitalization threshold levels are appropriate. These issues are especially critical for agencies that establish user fees based on actual costs and will become even more important as the government moves toward matching revenues and costs for performance measurement purposes. The information obtained as a result of our survey can be used as a tool for further analysis and assessment of these issues. We provided a draft of this report to the 14 federal agencies and 12 private sector companies that participated in our survey, as well as to the Department of the Treasury and OMB. We received comments from Treasury, OMB, and 4 of the 14 federal agencies surveyed: USDA, the Department of the Interior (Bureau of Reclamation), NASA, and the Department of State (see appendixes XI through XV). Seven federal agencies, including the departments of Education, Energy, Justice (Bureau of Prisons (BOP)), Transportation (FAA), and Veterans Affairs (VA), as well as the Tennessee Valley Authority (TVA) and USPS, provided primarily editorial comments, which we have incorporated into the report as appropriate. The remaining 3 agencies, which include the Department of Commerce (NOAA), the General Services Administration (GSA), and SSA, reviewed a draft of this report and told us they had no comments. Three PSC members that participated in the survey, Allstate, McGraw-Hill, and PPG Industries, also provided primarily editorial comments, which we have incorporated into the report as appropriate. The substantive comments we received from the Treasury, OMB, and federal entities had a common theme, in that they all generally took issue with comparing the capitalization threshold levels in the federal government to those in the private sector.
For example, the Department of the Treasury stated that the private sector has income tax considerations that affect capitalization thresholds, but these are not an issue at federal agencies. In this regard, our private sector survey instrument (see appendix X) recognized this consideration by specifically asking for information regarding practices for financial reporting, or book purposes, and not for income tax reporting. USDA referred to inherent differences between the government and the private sector in reporting cost and income. NASA and the State Department commented that the report did not acknowledge the private sector’s profit objective, which they viewed as the main force behind its PP&E policies and practices, as distinctly different from the financial reporting objectives of the U.S. government. Our views on asset capitalization are based upon two fundamental accounting concepts: the matching principle and materiality. The matching principle aims to assign costs to the proper period. In the case of capital assets, this is done through depreciation to recognize the use of the asset and can only occur if the asset is capitalized and not totally expensed when placed in service. The concept of materiality overlays the matching principle to provide relief from capitalizing and tracking assets that are immaterial to an entity’s financial statements. The establishment of a capitalization threshold policy must be supported by a detailed analysis, anchored by these two fundamental principles of matching and materiality. Furthermore, capitalization thresholds should be periodically reevaluated to help ensure their continuing relevance. Our report provides baseline data that we believe could be useful to federal agencies in analyzing capitalization thresholds. For example, NASA’s reported total assets as of September 30, 2000, were $34.5 billion, similar to Pfizer’s reported $33.5 billion at its fiscal year-end of December 31, 2000. 
NASA’s capitalization threshold for both real and personal property is $100,000 compared to Pfizer’s $1,000 threshold. Further, NOAA’s reported total assets at September 30, 2000, were $5.5 billion, similar to McGraw-Hill’s reported $4.9 billion at its fiscal year-end of December 31, 2000. NOAA’s capitalization threshold for both real and personal property is $200,000 compared to McGraw-Hill’s $2,000 threshold. Our survey work was not designed to conclude on the reasonableness of the capitalization threshold levels being applied at the federal agencies or the private sector companies nor do we draw any conclusions. However, the widely varying threshold levels among the federal agencies, the sharp increases in recent years, and the large differences from private sector companies of considerable size in terms of reported PP&E and total assets are issues that we plan to review further. Inappropriate or excessive capitalization thresholds can have a significant impact on financial reporting by reducing the amount of federal assets that are reported on the balance sheet and by jeopardizing the matching of costs to the appropriate period of asset utilization. While certainly differences exist between federal financial reporting objectives and those in the private sector, there are similarities as well. Both federal financial statements and those of private sector companies seek to provide reliable, useful, and timely information to their users. Similar to private sector companies’ responsibility to fairly state profits or net income, federal entities have a responsibility to fairly state the net cost of operations. This is also important in determining fees to be charged, in other efforts to recoup costs through any reimbursement arrangement, and in the ability to match costs with performance. OMB’s comments also included similar concerns related to comparing federal capitalization threshold levels to those in the private sector. 
In addition, OMB noted that a comparison to the capitalization threshold levels of state and local governments would be informative, and referred to a recent survey of state comptrollers, done by the National Association of State Comptrollers (NASC). The results of the survey were reported in the July 2002 newsletter of the National Association of State Auditors, Comptrollers and Treasurers. While the NASC survey and its reported results appeared after we had completed our fieldwork, OMB felt strongly that the survey and its results should be mentioned in our report. Although a review of the NASC survey was not within the scope of our work, we noted that a significant portion of the states participating in the survey reported a $5,000 threshold, which is also the threshold required by the federal government for grant recipients’ recovery of costs under OMB Circular A-87. These thresholds, particularly those for personal property, are in line with those used by most of the private sector survey participants. Federal Acquisition Regulations also require federal government contractors to apply a capitalization threshold not to exceed $5,000. OMB, Treasury, and the Bureau of Reclamation took issue with our statement that the lack of consistency in capitalization threshold levels among federal agencies could potentially lead to reporting problems in the U.S. government’s consolidated financial statements and performance measurement comparisons. As stated in our report, individual capitalization threshold levels are permissible under federal accounting standards, and because each federal agency was established with a specific mission, they may possess unique assets to achieve their respective goals. At the same time, consistent treatment of like assets is critical to accurate performance measurement and reliable, relevant consolidated financial reporting. 
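The interaction between a capitalization threshold and the matching principle discussed earlier can be sketched numerically. This is a minimal illustration, with a hypothetical asset cost and useful life that are not drawn from our survey: an asset below the threshold is expensed entirely in its first year, while an asset at or above it is depreciated (here, straight line) over its useful life.

```python
def annual_costs(cost, useful_life_years, threshold):
    """Cost recognized in each year of the asset's life (simplified)."""
    if cost < threshold:
        # Below the threshold: the full cost is expensed in year 1.
        return [cost] + [0] * (useful_life_years - 1)
    # At or above the threshold: straight-line depreciation matches
    # cost to each period of asset utilization.
    return [cost / useful_life_years] * useful_life_years

asset_cost = 150_000  # hypothetical equipment purchase
life = 5              # hypothetical 5-year useful life

# Under a $100,000 threshold the cost is matched to each year of use:
print(annual_costs(asset_cost, life, 100_000))  # [30000.0, 30000.0, ...]
# Under a $200,000 threshold the same asset hits year-1 net cost all at once:
print(annual_costs(asset_cost, life, 200_000))  # [150000, 0, 0, 0, 0]
```

The sketch shows why a higher threshold shifts more cost into the year of acquisition, which is the "jeopardizing the matching of costs" concern raised above.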
Management has a responsibility to ensure that the financial statements are fairly stated, in all material respects, and the auditor’s role is to provide an opinion on that basic assertion, based on its work. As the auditor of the U.S. government’s financial statements, we must ensure that the varying capitalization thresholds do not result in or contribute to a material misstatement at the consolidated level. The results of our survey can provide useful baseline data to OMB and Treasury in their respective roles, and to agencies and their auditors as they continue to periodically assess the adequacy of the capitalization threshold in terms of material impact on financial reporting. We are sending copies of this report to the Chairman and Ranking Minority Member, Senate Committee on Governmental Affairs; the Chairman and Ranking Minority Member, House Committee on Government Reform; the Chairman and Ranking Minority Member, Subcommittee on Government Efficiency, Financial Management and Intergovernmental Relations, House Committee on Government Reform; and other interested congressional committees. We are also sending copies to the Chief Financial Officers, the Inspectors General, and other interested parties, including the survey participants, the Private Sector Council, and the Chairman of the Federal Accounting Standards Advisory Board. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions on this report, please contact me at (202) 512-9505 or Mary Arnold Mohiyuddin at (202) 512-3087. The objectives of this report were to determine (1) the federal government’s current capitalization threshold practices for PP&E and how those practices compare to practices being applied to PP&E in the private sector and (2) the useful life policies within the federal government and how they compare to those used in the private sector. 
To fulfill these objectives, we developed two surveys, one for federal agencies and one for private sector companies. The surveys were used to collect information on capitalization thresholds and useful life policies, studies or analyses supporting those policies, and other related data that would assist us in determining the rationale for these PP&E policies as well as give us an indication of any differences or similarities between federal practices and private sector practices in the PP&E policy area. For the federal government, we considered the 24 federal agencies responsible for annual audited financial statements as required under the CFO Act as expanded by the Government Management and Reform Act of 1994. We sent the survey to federal agencies with a reported $4 billion or more of net PP&E at September 30, 2000, except for DOD. For four agencies (Commerce, Interior, Justice, and Transportation), we surveyed a single component rather than the entire department, due to the significant number of reporting components or the possibility of differing PP&E accounting policies for the various components. For each of those four agencies, we selected the component with the largest percentage of the total reported PP&E for that department. In addition, we randomly selected two federal agencies with net PP&E as of September 30, 2000, well below $4 billion to participate in the survey. We received completed surveys from all 14 federal agencies we contacted. Although DOD is the largest holder of PP&E in the federal government, we chose not to include it in this survey. DOD had a study performed by contractors to validate its capitalization thresholds and useful life policies for personal and real property. We reviewed the contractors’ work and agreed that certain limitations they cited in their reports pertaining to the reliability and completeness of the data could directly affect the assessment of the adequacy of the capitalization threshold and useful life policies. 
Nonetheless, the reported value of net PP&E for federal survey participants represents over half of the federal government’s reported net PP&E as of September 30, 2000. Appendix II lists all agency survey participants. We also surveyed member companies of the PSC, a nonprofit, nonpartisan public service organization committed to helping the federal government improve its efficiency, management, and productivity through cooperative sharing of knowledge. We sent our survey to all member companies, approximately 40, and received completed surveys from 12 PSC members. Appendix II contains the complete list of PSC survey participants. We did not audit or verify the information provided by the federal agency or private sector survey participants in any way. We summarized the data collected from both survey groups, as reported to us by the respondents. We conducted telephone interviews with personnel at certain agencies and PSC members for follow-up questions or clarification purposes as needed. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database can introduce unwanted variability into the survey results. We took steps in the development of the questionnaires, the data collection, and the data editing and analysis to minimize the nonsampling errors. For example, we pretested the questionnaires with a number of respondents to refine the survey instruments, we edited the surveys and called respondents to clarify answers, and we verified a sample of the survey data that was entered into our database for any keypunch errors. We reviewed GAAP and concepts that related to PP&E accounting, as well as federal reporting guidelines issued by Treasury and OMB. 
In addition, we reviewed the financial statements and related notes to the financial statements of the federal agencies and private sector companies that participated in the survey. We performed our work from May 2001 through February 2002 in accordance with generally accepted government auditing standards. We provided a draft of this report to 14 federal agencies and 12 private sector companies that participated in our survey, as well as to the Department of the Treasury and OMB. We received comments from Treasury, OMB, and 4 of the 14 federal agencies surveyed, including USDA, the Department of the Interior (Bureau of Reclamation), NASA, and the Department of State. Seven federal agencies, including the departments of Education, Energy, Justice (BOP), Transportation (FAA), and Veterans Affairs, as well as TVA and USPS, provided primarily editorial comments, which we have incorporated into the report as appropriate. The remaining 3, which include the Department of Commerce (NOAA), GSA, and SSA, reviewed a draft of this report and told us they had no comments. Three PSC members that participated in the survey, Allstate, McGraw-Hill, and PPG Industries, also provided primarily editorial comments, which we have incorporated into the report as appropriate. [Appendix tables, omitted here, list net PP&E as a percentage of total assets for the surveyed federal agencies (including the Bureau of Prisons (DOJ), Bureau of Reclamation (DOI), Energy (except PMA PP&E), FAA (DOT), and NOAA (Commerce)) and for private sector companies such as Meredith Corp. Table notes: where an agency did not report a capitalization threshold prior to its current level, or for a property category, no increase was calculated; TVA was not included because it capitalizes entire projects instead of individual assets and has no specific software threshold.] Staff members making key contributions to this report were Linda J. Brigham, Amy C. Chang, Francine M. DelVecchio, Cleggett S. Funkhouser, Stuart M. Kaufman, David C. Merrill, and Lisa M. Warde. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading. 
| In passing the 1990 Chief Financial Officers Act and a range of other financial management reform legislation, Congress has sought to overcome the historical lack of reliable, useful, and timely information with which to make informed decisions, measure and control costs, manage for results, and ensure financial accountability on an ongoing basis. Reported capitalization threshold levels at the 14 agencies GAO surveyed ranged from zero to $250,000. Despite the sharp increase in the capitalization threshold, all but one of the 14 agencies responded that they maintained property records for the government's general property, plant, and equipment (PP&E) not capitalized on the balance sheet, citing safeguarding of PP&E and supporting agency operations as the key reasons for maintaining such information. Federal capitalization thresholds are significantly higher than those reported by the private sector entities GAO surveyed. In some cases, the federal capitalization thresholds for real property were up to 50 times higher than those noted in the private sector. In contrast to the wide variance between federal agency and private sector capitalization threshold policies, federal agency useful life policies were generally similar to those found in the private sector. Estimated useful life classifications within the federal government ranged from 2 years to 40 years for personal property and 5 years to 100 years for real property. GAO did identify several differences attributable to the variety of assets owned by the entities that participated in its survey, rather than any systematic differences in useful life classifications. |
FECA is administered by Labor’s Office of Workers’ Compensation Programs (OWCP) and currently covers more than 2.7 million civilian federal employees from more than 70 different agencies. FECA benefits are paid to federal employees who are unable to work because of injuries sustained while performing their federal duties. Under FECA, workers’ compensation benefits are authorized for employees who suffer temporary or permanent disabilities resulting from work-related injuries or diseases. FECA benefits include payments for (1) loss of wages when employees cannot work because of work-related disabilities due to traumatic injuries or occupational diseases; (2) schedule awards for loss of, or loss of use of, a body part or function; (3) vocational rehabilitation; (4) death benefits for survivors; (5) burial allowances; and (6) medical care for injured workers. Wage-loss benefits for eligible workers with temporary or permanent total disabilities are generally equal to either 66-2/3 percent of salary for a worker with no spouse or dependent, or 75 percent of salary for a worker with a spouse or dependent. Wage-loss benefits can be reduced based on employees’ wage-earning capacities when they are capable of working again. OWCP provides wage-loss compensation until claimants can return to work in either their original positions or other suitable positions that meet medical work restrictions. Each year, most federal agencies reimburse OWCP for wage-loss compensation payments made to their employees from their annual appropriations. If claimants return to work but do not receive wages equal to that of their prior positions—such as claimants who return to work part-time—FECA benefits cover the difference between their current and previous salaries. Currently, there are no time or age limits placed on the receipt of FECA benefits. 
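The wage-loss computation described above can be sketched as follows. This is a deliberately simplified illustration, with a hypothetical salary and a function name of our own choosing; actual OWCP benefit determinations involve additional factors not modeled here.

```python
def feca_wage_loss_benefit(preinjury_salary, has_spouse_or_dependent,
                           current_wages=0.0):
    """Simplified annual FECA wage-loss compensation.

    75 percent of salary with a spouse or dependent, 66-2/3 percent
    without; for a claimant back at work at lower pay, compensation is
    based on the difference between previous and current wages.
    """
    rate = 0.75 if has_spouse_or_dependent else 2 / 3  # 66-2/3 percent
    wage_loss = max(preinjury_salary - current_wages, 0.0)
    return rate * wage_loss

# A totally disabled worker earning $60,000 with a dependent:
print(feca_wage_loss_benefit(60_000, True))          # 45000.0
# The same worker with no spouse or dependent:
print(round(feca_wage_loss_benefit(60_000, False)))  # 40000
# After a part-time return to work paying $24,000:
print(feca_wage_loss_benefit(60_000, True, current_wages=24_000))  # 27000.0
```

Because the benefit is tax-free and has no time or age limit, these amounts can compare favorably with taxable retirement annuities, which is the tension the rest of this testimony explores.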
With the passage of the Federal Employees’ Compensation Act of 1916, members of Congress raised concerns about levels of benefits and potential costs of establishing a program for injured federal employees. As Congress debated the Act’s provisions in 1916 and again in 1923, some congressional members were concerned that a broad interpretation threatened to make the workers’ compensation program, in effect, a general pension. The 1916 Act granted benefits to federal workers for work-related injuries. These benefits were not necessarily granted for a lifetime; they could be suspended or terminated under certain conditions. Nevertheless, the Act placed no age or time limitations on injured workers’ receipt of wage compensation. The Act did contain a provision allowing benefits to be reduced for older beneficiaries. The provision stated that compensation benefits could be adjusted when the wage-earning capacity of the disabled employee would probably have decreased on account of old age, irrespective of the injury. While the 1916 Act did not specify the age at which compensation benefits could be reduced, the 1949 FECA amendments established 70 as the age at which a review could occur to determine if a reduction were warranted. In 1974, Congress again eliminated the age provision. Typically, federal workers participate in one of two retirement systems, which are administered by the Office of Personnel Management (OPM): the Civil Service Retirement System (CSRS), or the Federal Employees’ Retirement System (FERS). Most civilian federal employees who were hired before 1984 are covered by CSRS. Under CSRS, employees generally do not pay Social Security taxes or earn Social Security benefits. Federal employees first hired in 1984 or later are covered by FERS. All federal employees who are enrolled in FERS pay Social Security taxes and earn Social Security benefits. 
Federal employees enrolled in either CSRS or FERS also may contribute to the Thrift Savings Plan (TSP); however, only employees enrolled in FERS are eligible for employer matching contributions to the TSP. Under both CSRS and FERS, the date of an employee’s eligibility to retire with an annuity depends on his or her age and years of service. The amount of the retirement annuity is determined by three factors: the number of years of service, the accrual rate at which benefits are earned for each year of service, and the salary base to which the accrual rate is applied. In both CSRS and FERS, the salary base is the average of the highest three consecutive years of basic pay. This is often called “high-3” pay. According to CRS, an injured employee cannot contribute to Social Security or to the TSP while receiving workers’ compensation because Social Security taxes and TSP contributions must be paid from earnings, and workers’ compensation payments are not classified as earnings under either the Social Security Act or the Internal Revenue Code. As a result, the employee’s future retirement income from Social Security and the TSP may be reduced. Legislation passed in 2003 increased the FERS basic annuity from 1 percent of the individual’s high-3 average pay to 2 percent of high-3 average pay while an individual receives workers’ compensation, which would help replace income that may have been lost from lower Social Security benefits and reduced income from TSP. Concerns that beneficiaries remain in the FECA program past retirement age have led to several proposals to change the program. Under current rules, an age-eligible employee with 30 years of service covered by FERS could accrue pension benefits that are 30 percent of their high-3 average pay and under CSRS could accrue almost 60 percent of their high-3 average pay. Under both systems benefits can be taxed. 
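The annuity arithmetic above can be illustrated with a short sketch. The 1-percent-per-year FERS accrual follows from the text’s 30-years-equals-30-percent figure; the CSRS accrual schedule shown (1.5 percent for the first 5 years, 1.75 percent for the next 5, and 2 percent thereafter) is the standard CSRS formula and is our assumption, since the testimony gives only the “almost 60 percent” result.

```python
def fers_annuity_pct(years):
    """FERS basic annuity as a percent of high-3 pay: 1 percent per
    year of service (ignoring enhanced rates for some retirees)."""
    return 1.0 * years

def csrs_annuity_pct(years):
    """CSRS annuity percent of high-3 pay, using the standard accrual
    schedule (our assumption; not spelled out in the testimony)."""
    first = min(years, 5) * 1.5           # 1.5% for the first 5 years
    middle = min(max(years - 5, 0), 5) * 1.75  # 1.75% for the next 5
    rest = max(years - 10, 0) * 2.0       # 2% for each year thereafter
    return first + middle + rest

years = 30
high3 = 80_000  # hypothetical high-3 average pay
print(fers_annuity_pct(years))                # 30.0 percent of high-3
print(csrs_annuity_pct(years))                # 56.25 -- "almost 60 percent"
print(csrs_annuity_pct(years) / 100 * high3)  # 45000.0 annual annuity
```

Both annuities are taxable, which is what makes the comparison with the tax-free FECA benefit in the next paragraph meaningful.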
By contrast, FECA beneficiaries can receive up to 75 percent of their preinjury income, tax-free, if they have dependents and 66-2/3 percent without dependents. Because returning to work could mean giving up a FECA benefit for a reduced pension amount, concerns have been raised by some that the program may provide incentives for beneficiaries to continue on the program beyond retirement age. In 1996, we reported on two alternative proposals to change FECA benefits once beneficiaries reach the age at which retirement typically occurs: (1) converting FECA benefits to retirement benefits, and (2) changing FECA wage-loss benefits to a newly established FECA annuity. The first proposal would convert FECA benefits for workers who are injured or become ill to regular federal employee retirement benefits at retirement age. In 1981, the Reagan administration proposed comprehensive FECA reform, including a provision to convert FECA benefits to retirement benefits at age 65. The proposal included certain employee protections, one of which was calculating retirement benefits on the basis of the employee’s pay at time of injury (with adjustments for regular federal pay increases). According to proponents, this change would improve agencies’ operations because their discretionary budgets would be decreased by FECA costs, and, by reducing caseload, it would allow Labor to better manage new and existing cases for younger injured workers. For example, a bill recently introduced in Congress includes a similar provision, requiring FECA recipients to retire upon reaching retirement age as defined by the Social Security Act. The second proposal, based on proposals that several agencies developed in the early 1990s, would convert FECA wage-loss compensation benefits to a FECA annuity benefit. These agency proposals would have reduced FECA benefits by a set percentage two years after beneficiaries reached civil service retirement eligibility. 
Proponents of this alternative noted that changing to a FECA annuity would be simpler than converting FECA beneficiaries to the retirement system, would result in consistent benefits, and would allow benefits to remain tax-free. Proponents also argued that a FECA annuity would keep the changed benefit within the FECA program, thereby avoiding complexities associated with converting FECA benefits under CSRS and FERS. For example, converting to retirement benefits could be difficult for some employees who currently are not participating in a federal retirement plan. Also, funding future retirement benefits could be a problem if the FECA recipient has not been making retirement contributions. Labor recently suggested a change to the FECA program that would reduce wage-loss benefits for Social Security retirement-aged recipients to 50 percent of their gross salary at the date of injury, but would still be tax-free. Labor’s proposal would still keep the changed benefit within the FECA program. In our 1996 report, however, we identified a number of issues with both alternative proposals. For example, some experts and other stakeholders we interviewed noted that age discrimination posed a possible legal challenge and that some provisions in the law would need to be addressed with new statutory language. Others noted that benefit reductions would cause economic hardships for older beneficiaries. Some noted that without the protections of the workers’ compensation program, injured employees who have few years of service or are ineligible for retirement might suffer large reductions in benefits. Moreover, opponents to change also viewed reduced benefits as breaking the workers’ compensation promise. Another concern was that agencies’ anticipation of reduced costs for workers’ compensation could result in fewer incentives to manage claims or to develop safer working environments. 
We also discussed in our 1996 report a number of issues that merit consideration in crafting legislation to change benefits for older beneficiaries. Going forward, Congress may wish to consider the following questions as it assesses and considers current reform proposals: (1) How would benefits be computed? (2) Which beneficiaries would be affected? (3) What criteria, such as age or retirement eligibility, would initiate changed benefits? (4) How would other benefits, such as FECA medical and survivor benefits, be treated and administered? (5) How would benefits, particularly retirement benefits, be funded? The retirement conversion alternative raises complex issues, arising in part from the fact that conversion could result in varying retirement benefits, depending on conversion provisions, retirement systems, and individual circumstances. A key issue is whether or not benefits would be adjusted. The unadjusted option would allow for retirement benefits as provided by current law. The adjusted option would typically ensure that time on the FECA rolls was treated as if the beneficiary had continued to work. This adjustment could (1) credit time on FECA for years of service or (2) increase the salary base (for example, increasing salary from the time of injury by either an index of wage increases or inflation, assigning the current pay of the position, or providing for merit increases and possible promotions missed due to the injury). Determining the FECA annuity would require deciding what percentage of FECA benefits the annuity would represent. Under previous proposals benefits would be two-thirds of the previous FECA compensation benefits. Provisions to adjust calculations for certain categories of beneficiaries also have been proposed. Under previous proposals, partially disabled individuals receiving reduced compensation would receive the lesser of the FECA annuity or the current reduced benefit. 
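The annuity determination under the previous proposals described above can be sketched as follows. The function names and dollar figures are hypothetical; only the two-thirds fraction and the lesser-of rule come from the proposals themselves.

```python
def feca_annuity(current_feca_benefit, fraction=2 / 3):
    """FECA annuity under previous proposals: a set fraction
    (two-thirds) of the prior FECA compensation benefit."""
    return fraction * current_feca_benefit

def partial_disability_benefit(annuity_amount, current_reduced_benefit):
    """Partially disabled individuals receiving reduced compensation
    would receive the lesser of the FECA annuity or the current
    reduced benefit."""
    return min(annuity_amount, current_reduced_benefit)

full_benefit = 45_000.0  # hypothetical prior FECA wage-loss benefit
annuity = round(feca_annuity(full_benefit), 2)
print(annuity)  # 30000.0
# A partially disabled recipient already reduced to $27,000 keeps $27,000:
print(partial_disability_benefit(annuity, 27_000.0))  # 27000.0
```

The choice of fraction is the central design lever here; as the next paragraph notes, it could instead be set to approximate a taxable retirement annuity.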
FECA annuity computations could also be devised to achieve certain benchmarks. For example, the formula for a FECA annuity could be designed to approximate a taxable retirement annuity. One issue concerning a FECA annuity is whether it would be permanent once set, or whether it would be subject to adjustments based on continuing OWCP reviews of the beneficiary’s workers’ compensation claim. Currently, most federal employees are covered by FERS, but conversion proposals might have to consider differences between FERS and CSRS participants, and participants in any specialized retirement systems. Other groups that might be uniquely affected include injured workers who are not eligible for federal retirement benefits; individuals eligible for retirement conversion benefits but not vested; and individuals who are partially disabled FECA recipients but active federal employees. With regard to vesting, those who have insufficient years of service to be vested might be given credit for time on the FECA rolls until vested. There is also the question of whether changes will focus on current or future beneficiaries. Exempting current beneficiaries delays receipt of full savings from FECA cost reductions to the future. One option might be a transition period for current beneficiaries. For example, current beneficiaries could be given notice that their benefits would be changed after a certain number of years. Past proposals have used either age or retirement eligibility as the primary criterion for changing benefits. If retirement eligibility is used, consideration must be given to establishing eligibility for those who might otherwise not become retirement eligible. This would be true for either the retirement conversion or the annuity option. At least for purposes of initiating the changed benefit, time on the FECA rolls might be treated as if it counted for service time toward retirement eligibility. 
Deciding on the criteria that would initiate change in benefits might require developing benchmarks. For example, if age were the criterion, it might be benchmarked against the average age of retirement for federal employees, or the average age of retirement for all employees. Another question is whether to use secondary criteria to delay changed benefits in certain cases. The amount of time one has received FECA benefits is one possible example of secondary criteria. Secondary criteria might prove important in cases where an older, injured worker may face retirement under the retirement conversion option even when recovery and return to work is almost assured. In addition to changing FECA compensation benefits, consideration should be given to whether to change other FECA benefits, such as medical benefits or survivor benefits. For example, the 1981 Reagan administration proposal would have ended survivor benefits under FECA for those beneficiaries whose benefits were converted to the retirement system. Another issue to consider is who will administer benefits if program changes shift responsibilities—OPM administers retirement annuity benefits for federal employees, and Labor currently administers FECA benefits. Although it may be advantageous to consolidate case management in one agency, such as OPM, if the retirement conversion alternative were selected, the agency chosen to manage the case might have to develop an expertise that it does not currently possess. For example, OPM might have to develop expertise in medical fee schedules to control workers’ compensation medical costs. For the retirement conversion alternative, another issue is the funding of any retirement benefit shortfall. Currently, agencies and individuals do not make retirement contributions if an individual receives FECA benefits; thus, if retirement benefits exceed those for which contributions have been made, retirement funding shortfalls would occur. 
Retirement fund shortfalls can be funded through payments made by agencies at the time of conversion or prior to conversion. First, lump-sum payment could be made by agencies at the time of the conversion. This option has been criticized because the start-up cost was considered too high. Second, shortfalls could be covered on a pay-as-you-go basis after conversion. In this approach, agencies might make annual payments to cover the shortfall resulting from the conversions. Third, agencies’ and employees’ contributions to the retirement fund could continue before conversion, preventing shortfalls at conversion. Proposals for the FECA annuity alternative typically keep funding under the current FECA chargeback system. This is an annual pay-as-you-go system with agencies paying for the previous year’s FECA costs. In total, these five questions provide a framework for considering proposals to change the program. In conclusion, FECA continues to play a vital role in providing compensation to federal employees who are unable to work because of injuries sustained while performing their duties. However, continued concerns that the program provides incentives for beneficiaries to remain on the program at, and beyond, retirement age have led to calls for the program to be reformed. Although FECA’s basic structure has not been significantly amended for many years, there continues to be interest in reforming the program. Proposals to change benefits for older beneficiaries raise a number of important issues, with implications for both beneficiaries and federal agencies. These implications warrant careful attention to outcomes that could result from any changes. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Andrew Sherrill at (202) 512-7215 or sherrilla@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. In addition to the individual named above, key contributors to this testimony include Patrick Dibattista, H. Brandon Haller, Michelle Bracy, Tonnyé Conner-White, James Rebbe, Kathleen van Gelder, Walter Vance, and Matthew Saradjian. Federal Workers’ Compensation: Issues Associated with Changing Benefits for Older Beneficiaries. GAO-11-655T. Washington, D.C.: May 12, 2011. Federal Workers’ Compensation: Better Data and Management Strategies Would Strengthen Efforts to Prevent and Address Improper Payments. GAO-08-284. Washington, D.C.: February 26, 2008. Postal Service Employee Workers’ Compensation Claims Not Always Processed Timely, but Problems Hamper Complete Measurement. GAO-03-158R. Washington, D.C.: December 20, 2002. Oversight of the Management of the Office of Workers’ Compensation Programs: Are the Complaints Justified? GAO-02-964R. Washington, D.C.: July 19, 2002. U.S. Postal Service: Workers’ Compensation Benefits for Postal Employees. GAO-02-729T. Washington, D.C.: May 9, 2002. Office of Workers’ Compensation Programs: Further Actions Are Needed to Improve Claims Review. GAO-02-725T. Washington, D.C.: May 9, 2002. Federal Employees’ Compensation Act: Percentages of Take-Home Pay Replaced by Compensation Benefits. GGD-98-174. Washington, D.C.: August 17, 1998. Federal Employees’ Compensation Act: Issues Associated With Changing Benefits for Older Beneficiaries. GGD-96-138BR. Washington, D.C.: August 14, 1996. Workers’ Compensation: Selected Comparisons of Federal and State Laws. GGD-96-76. Washington, D.C.: April 3, 1996. Federal Employees’ Compensation Act: Redefining Continuation of Pay Could Result in Additional Refunds to the Government. GGD-95-135. Washington, D.C.: June 8, 1995. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony discusses issues related to possible changes to the Federal Employees' Compensation Act (FECA) program, a topic that we have reported on in the past. At the end of chargeback year 2010, the FECA program, administered by the Department of Labor (Labor), had paid more than $1.88 billion in wage-loss compensation, impairment, and death benefits, and another $898.1 million for medical and rehabilitation services and supplies. Currently, FECA benefits are paid to federal employees who are unable to work because of injuries sustained while performing their federal duties, including those who are at or older than retirement age. Concerns have been raised that federal employees on FECA receive benefits that could be more generous than under the traditional federal retirement system and that the program may have unintended incentives for beneficiaries to remain on the FECA program beyond the traditional retirement age. Over the past 30 years, there have been various proposals to change the FECA program to address this concern. Recent policy proposals to change the way FECA is administered for older beneficiaries share characteristics with past proposals we have discussed in prior work. In August 1996, we reported on the issues associated with changing FECA benefits for older beneficiaries. Because FECA's benefit structure has not been significantly amended in more than 35 years, the policy questions raised in our 1996 report are still relevant and important today. 
This testimony will focus on (1) previous proposals for changing FECA benefits for older beneficiaries and (2) questions and associated issues that merit consideration in crafting legislation to change benefits for older beneficiaries. This statement is drawn primarily from our 1996 report in which we solicited views from selected federal agencies and employee groups to identify questions and associated issues with crafting benefit changes. For that report, we also reviewed relevant laws and analyzed previous studies and legislative proposals that would have changed benefits for older FECA beneficiaries. The perception that many retirement-age beneficiaries were receiving more generous benefits on FECA had generated two alternative proposals to change benefits once beneficiaries reach the age at which retirement typically occurs: (1) converting FECA benefits to retirement benefits, and (2) changing FECA wage-loss benefits by establishing a new FECA annuity. We also discussed a number of issues to be considered in crafting legislation to change benefits for older beneficiaries. Going forward, Congress may wish to consider the following questions in assessing current proposals for change: (1) How would benefits be computed? (2) Which beneficiaries would be affected? (3) What criteria, such as age or retirement eligibility, would initiate changed benefits? (4) How would other benefits, such as FECA medical and survivor benefits, be treated and administered? (5) How would benefits, particularly retirement benefits, be funded? The retirement conversion alternative raises complex issues, arising in part from the fact that conversion could result in varying retirement benefits, depending on conversion provisions, retirement systems, and individual circumstances. A key issue is whether or not benefits would be adjusted. The unadjusted option would allow for retirement benefits as provided by current law. 
The adjusted option would typically ensure that time on the FECA rolls was treated as if the beneficiary had continued to work. This adjustment could (1) credit time on FECA for years of service or (2) increase the salary base (for example, increasing salary from the time of injury by either an index of wage increases or inflation, assigning the current pay of the position, or providing for merit increases and possible promotions missed due to the injury). Currently, most federal employees are covered by FERS, but conversion proposals might have to consider differences among FERS participants, CSRS participants, and participants in any specialized retirement systems. Other groups that might be uniquely affected include injured workers who are not eligible for federal retirement benefits; individuals eligible for retirement conversion benefits but not yet vested; and individuals who are partially disabled FECA recipients but remain active federal employees. With regard to vesting, those who have insufficient years of service to be vested might be given credit for time on the FECA rolls until vested. There is also the question of whether changes would apply to current beneficiaries or only to future beneficiaries. Exempting current beneficiaries delays receipt of full savings from FECA cost reductions to the future. One option might be a transition period for current beneficiaries. For example, current beneficiaries could be given notice that their benefits would be changed after a certain number of years. Past proposals have used either age or retirement eligibility as the primary criterion for changing benefits. If retirement eligibility is used, consideration must be given to establishing eligibility for those who might otherwise not become retirement eligible. This would be true for either the retirement conversion or the annuity option. At least for purposes of initiating the changed benefit, time on the FECA rolls might be treated as if it counted for service time toward retirement eligibility. 
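The difference between the unadjusted and adjusted conversion options can be illustrated with a simplified, FERS-style annuity computation (roughly 1 percent of the high-3 salary per year of service). The worker, salary, wage-growth rate, and benefit factor below are hypothetical illustrations, not figures from any of the proposals discussed in this testimony.

```python
def annuity(high3_salary, years_of_service, factor=0.01):
    """Simplified FERS-style basic annuity: factor x years x high-3 salary."""
    return factor * years_of_service * high3_salary

# Hypothetical worker: injured with 20 years of service and a $60,000
# salary, then 17 years on the FECA rolls before reaching retirement age.
unadjusted = annuity(60_000, 20)            # no credit for time on FECA

# Adjusted option 1: credit the 17 FECA years as years of service.
credited = annuity(60_000, 20 + 17)

# Adjusted option 2: also index the salary base by an assumed 3 percent
# annual wage growth over the 17 FECA years.
indexed = annuity(60_000 * 1.03 ** 17, 20 + 17)

print(f"unadjusted: ${unadjusted:,.0f}")         # $12,000
print(f"service credited: ${credited:,.0f}")     # $22,200
print(f"credited and indexed: ${indexed:,.0f}")
```

Under these assumptions, crediting FECA time nearly doubles the computed annuity, and indexing the salary base raises it further, which is why the choice between the unadjusted and adjusted options materially affects both beneficiaries and agency costs.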
Deciding on the criteria that would initiate change in benefits might require developing benchmarks. In addition to changing FECA compensation benefits, consideration should be given to whether to change other FECA benefits, such as medical benefits or survivor benefits. For example, the 1981 Reagan administration proposal would have ended survivor benefits under FECA for those beneficiaries whose benefits were converted to the retirement system. For the retirement conversion alternative, another issue is the funding of any retirement benefit shortfall. Currently, agencies and individuals do not make retirement contributions if an individual receives FECA benefits; thus, if retirement benefits exceed those for which contributions have been made, retirement funding shortfalls would occur. Retirement fund shortfalls can be funded through payments made by agencies at the time of conversion or prior to conversion. |
Joint STARS is a multiservice, multimode radar system that is to provide the capability to locate, track, and classify wheeled and tracked vehicles beyond ground line of sight, during day and night, under most weather conditions. It is to provide Army Corps and Division commanders an “electronic high-ground” from which to observe enemy forces across the forward line of their own troops into an enemy’s first and second echelons. The Joint STARS radar is mounted on an Air Force E-8 aircraft, a Boeing 707 variant. It is to provide real-time information simultaneously to operators in the aircraft and operators in Army GSMs. These GSMs are to have the ability to supplement this radar data with unmanned aerial vehicle imagery and electronic intelligence reports. Through fiscal year 2001, the total cost of the Army’s Joint STARS development and acquisition is estimated at $1.4 billion. Since the Joint STARS program’s inception, four versions of GSMs have been developed prior to the CGS. They are the Limited Procurement Urgent, the Interim GSM, the Medium GSM, and the Light GSM. Descriptions of the various GSMs are provided in appendix I. Production quantities by fiscal year and GSM variant are detailed in table 1. The Army recently issued a solicitation for the CGS system and selected a contractor to produce the system. It awarded an 8-year production contract on December 14, 1995, and made a fiscal year 1996 commitment to the production of 18 systems, the maximum production allowed by the solicitation. The CGS system is to provide the same functionality as the Light GSM with an initial enhancement of the integration of secondary imagery data, and planned additional enhancements provided by post-award contract modifications. 
The CGS acquisition strategy provides for 2 years of LRIP, during which the Army anticipated buying 22 CGS systems at an estimated cost of about $138 million, though it received approval from DOD to procure up to an additional 16 CGS systems to accommodate other service and allied requirements. The Army’s first-year commitment to 18 systems and current plan to acquire 16 systems in the second year raise the estimated 2-year LRIP cost to over $153 million. Regarding program cost, DOD stated that the CGS LRIP quantity includes not only the number needed for testing purposes, but considers production rate efficiencies and cost factors. It believes that producing only four prior to test would require stopping and restarting production, resulting in the loss of skilled people, inefficient use of contractor resources, and higher costs. The CGS LRIP quantity does not, however, reflect consideration of production rate efficiencies and cost factors because under the CGS contract’s pricing structure, the planned second LRIP year acquisitions can be purchased in later years at lower cost. In sum, under the CGS contract, the Army can save millions of dollars by lowering future CGS LRIP acquisitions to the minimum quantity necessary to maintain the contract and then contracting for those systems in the post-LRIP years. The Light GSM and the Medium GSM were scheduled to be operationally tested during a Joint STARS multiservice OT&E. That test was delayed and then altered because of the deployment of Joint STARS assets to the European theater to support Bosnian operations. The Army now plans to evaluate the Medium and Light GSMs during that deployment and follow-on tests, if needed. It also plans to conduct an initial OT&E of the CGS system in the first quarter of fiscal year 1998. The degree and length of that initial OT&E will depend on how similar the CGS system is to its predecessors, which will be a function of the approach that the CGS contractor follows. 
The CGS solicitation provided functional specifications such that the proposals received may or may not represent significant hardware and software differences from already procured GSMs. The degree of technological difference between the CGS system and its predecessor systems, the Light GSM and Medium GSM, depends on the approach taken by the contractor. That difference will, in turn, influence the degree to which the Light and Medium GSMs’ performance during any OT&E can and should be relied upon as an indicator of the CGS’s maturity to continue production. Furthermore, the more similar the CGS system is to its predecessors, the less extensive its initial OT&E will need to be. The Army began procuring CGS systems prior to the completion of an OT&E by any GSM. However, the Army did not perform any risk analyses demonstrating that there was (1) an urgent need for the added capabilities of the CGS system or (2) any significant benefit to be derived from its accelerated procurement. According to DOD, the revised CGS development and production schedule fields ground stations in synch with E-8C aircraft deliveries. Under the prior development schedule, the Army planned to continue to buy pre-CGS model ground stations—presumably also in synch with E-8C aircraft deliveries. Furthermore, an Army official in the program executive office that has oversight of the Army’s Joint STARS program stated that the Air Force is behind in its E-8C delivery schedule and that, as a result, GSM acquisition is currently scheduled ahead of aircraft fieldings. Over the years, we have reported on numerous instances in which production of both major and nonmajor systems was optimistically permitted to begin under LRIP and continue based on factors other than the systems’ technical maturity. 
In our November 1994 report on the use of LRIP in the acquisition process, we detailed a number of examples of systems that entered LRIP before operational tests were conducted and that later experienced significant problems. For example, a year into the LRIP of the Navy T-45A aircraft, OT&E demonstrated that the T-45A was not effective in a carrier environment and was not operationally suitable because of safety deficiencies. Subsequent major design changes included a new engine, new wings, and a modified rudder. DOD believes that, unlike the Navy T-45A aircraft, the CGS is not a new, immature system. It has stated that the CGS system uses 100 percent of the Light GSM mechanical design, rack structure, power distribution, lighting, ventilation, and air conditioning. It has also stated that the Light GSM software baseline is the CGS baseline and that the CGS system represents the Light GSM functional baseline with the addition of product improvements. However, the CGS contractor may make configuration changes that could represent significant hardware and software differences from already procured GSMs. Furthermore, DOD’s position is also contradicted by the 2-year delay of the GSM full-rate production decision to follow a CGS OT&E and by the Joint STARS integrated product team’s call for an independent assessment of the CGS’s testing risk, given the nature and extent of the configuration changes that the selected contractor may make. The risks of systems starting production before operational tests are conducted are numerous. They include reliability that is significantly less than expectations, systems that cannot meet current specifications, systems that are never fielded and/or retired after fielding because of poor performance, and systems that require significant and expensive post-fielding repairs for faults identified during OT&E. 
While there is an operational need for Joint STARS, and despite the desire of operational commanders to have more capable systems as soon as possible, the fact remains that the Army has not adequately justified the urgency or benefits to be derived from accelerated fielding of the CGS in 1998 versus the originally planned fielding in fiscal year 2002. The Army’s CGS acquisition strategy seems to ignore the fact that to date the GSMs have undergone limited testing and demonstrated disappointing results in those tests. That acquisition strategy allowed the Army to begin procuring CGS systems without demonstrating resolution of issues raised as a result of prior tests and will allow it to continue procuring systems without demonstrating resolution of those issues. In December 1991, a decision was made that the Medium GSM would undergo a limited user test rather than a traditional initial OT&E. The absence of important functionality, including an unmanned aerial vehicle interface, a production representative data link, Defense Mapping Agency electronic map databases, and trained military operators, prompted this decision. Based on the results of this test, which occurred in early 1993, the Army Operational Test and Evaluation Command provided an overall assessment of the Medium GSM’s performance. It stated that the Medium GSM “consistently demonstrated potential to be operationally effective” and that the Medium GSM “demonstrated potential to be operationally suitable” (emphasis added). However, this was not a finding that the Medium GSM was operationally effective or suitable. 
The Command also noted that the “current software lacks robustness and reliability, and limits mission performance.” One of the Command’s recommendations was that prior to LRIP fielding, the Medium GSM “must successfully complete an independently evaluated operational demonstration including simultaneous employment of all software, interface, and tactics, techniques, and procedures corrections.” The Medium GSM has yet to successfully complete an independently evaluated operational test. Its initial OT&E was to be the multiservice OT&E. The Medium GSM follow-on system, the Light GSM, was also to participate in the multiservice OT&E. Like the Medium GSM, the Light GSM has yet to complete an OT&E. The Light GSM has, however, undergone other tests, including a Force Development Test and Evaluation (FDT&E) in September 1994; reliability confidence testing from October through December 1994; and a follow-on demonstration at Eglin Air Force Base in January 1995. In May 1995, we reported to the Secretary of Defense that based on a preliminary review of those test results, it was clear that the Light GSM had not met the DOD-set LRIP exit criteria and that our preliminary analysis indicated that, at best, the Light GSM had only passed 2 of the 12 Light GSM performance-related LRIP exit criteria. At the same time, the DOD Director of OT&E concluded that the Light GSM had only passed 1 of the 12 Light GSM performance-related LRIP exit criteria. The Director recommended a formal review of the program to identify the causes of problems, solutions, and appropriate tests to demonstrate the solutions. 
In a June 30, 1995, memorandum, the Director, commenting on efforts to resolve 55 specific problems identified in the Light GSM testing, stated that his goal “was to see that the Army had identified the key problems and was working effective fixes for those problems.” He added that he wanted the Joint STARS multiservice OT&E “to have a reasonable chance of success.” According to an OT&E official, the Director’s assessment of the Light GSM’s performance during those tests has not changed. The issue of the 55 specific problems was resolved based on the Director’s satisfaction “that the Army has identified a process to fix the various problems that have been identified . . . .” In some instances, problems were attributed to shortfalls in operator training or another non-materiel cause. The majority of deficiencies involved software fixes, not major hardware redesign. The Army has also gained experience operating the GSMs assigned to the III Corps and XVIII Airborne Corps and in training and preparation for multi-service OT&E. In November 1995, the Program Executive Officer for Joint STARS certified the system ready for OT&E, which attests to the developer’s confidence in system maturity. DOD believes that the GSMs’ prior test results indicate only prudent program risk. It states that the series of tests used in development of the GSMs, including a limited user test, FDT&E, reliability confidence testing, and other demonstrations, have been a continuous fix-test-fix process, which has identified shortfalls, determined fixes, and verified or tested the results. It also notes that during the current deployment of Joint STARS to the European Theater (Bosnia-Herzegovina), members of the Army and the Air Force test commands will conduct an operational evaluation of Joint STARS performance. 
Although the Army and the Air Force plan to operationally evaluate Joint STARS during that deployment, how well the Army’s process has worked remains to be demonstrated through the Light GSM’s performance during an OT&E. The Army’s commitment to its currently planned second-year LRIP buy of 16 CGS systems prior to the completion of the CGS OT&E would raise not only the program’s risk but also its cost. The CGS contract provides decreasing unit costs over its 8-year life. Furthermore, a program official stated, and our review of the contract indicates, that the Army needs to commit to only one CGS system in the second LRIP year to maintain the contract. If the Army buys one system in fiscal year 1997 and 37 systems in the third and fourth years of the contract, it could save over $5 million while obtaining the same 4-year buy of 56 systems currently anticipated given its fiscal year 1997 budget request and approved acquisition strategy. These savings can be seen in a comparison of tables 2 and 3. Table 2 details the first 4 years of the contract’s variable CGS acquisition costs under the Army’s anticipated future buy schedule. Table 3 details the first 4 years of those costs under a plan that minimizes the size of the second-year LRIP commitment. The Army lacks an analysis justifying a need to accelerate the fielding of the CGS system and can save millions of dollars by minimizing its second-year CGS production. Furthermore, there are inherent risks in procuring systems prior to their successful completion of an OT&E, and the benefits of the Army’s acquisition strategy do not clearly outweigh the associated risks. We therefore recommend that the Secretary of Defense direct the Secretary of the Army to limit the future system procurement to the minimum quantity necessary to maintain the CGS contract (i.e., one system in each contract option year) until the CGS has successfully completed an OT&E. 
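Because tables 2 and 3 are not reproduced in this excerpt, the deferral arithmetic behind the recommendation can be sketched with hypothetical figures. The only property taken from the report is that unit costs decline in each contract year, so shifting purchases out of the second LRIP year lowers the cost of the same total buy. The unit prices and the year-3/year-4 split of the remaining 37 systems below are illustrative assumptions, not contract data.

```python
# Hypothetical declining unit prices ($ millions) for contract years 1-4;
# the actual prices are in the CGS contract, not in this report excerpt.
unit_price = {1: 1.40, 2: 1.30, 3: 1.20, 4: 1.10}

def total_cost(buys_by_year):
    """Total variable acquisition cost for a given buy schedule."""
    return sum(unit_price[year] * qty for year, qty in buys_by_year.items())

# Anticipated plan: 18 + 16 in LRIP, then the balance, for 56 systems in 4 years.
anticipated = {1: 18, 2: 16, 3: 11, 4: 11}
# Deferred plan: the contract minimum of 1 system in year 2, with the
# remaining 37 systems spread (hypothetically 19/18) over years 3 and 4.
deferred = {1: 18, 2: 1, 3: 19, 4: 18}

savings = total_cost(anticipated) - total_cost(deferred)  # positive: deferral is cheaper
```

The same logic underlies the report’s finding that buying one system in fiscal year 1997 and 37 systems in later years could save over $5 million on the same 56-system buy.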
In commenting on a draft of this report, DOD disagreed with our conclusion that the Army’s CGS acquisition strategy was unnecessarily risky and our recommendation to reduce that risk. DOD took the position that the acquisition strategy espouses prudent risk in balance with program cost, schedule, and technical requirements. DOD’s comments are reprinted in their entirety in appendix II. In light of DOD’s unwillingness to have the Army revise its acquisition strategy for the CGS, Congress may wish to take the actions necessary to limit the number of CGS systems to be procured under LRIP prior to the CGS successfully completing operational testing. During this review, we interviewed officials at and reviewed documents from the offices of the Under Secretary of Defense for Acquisition and Technology and the Director for Operational Test and Evaluation in Washington, D.C. We also visited officials and reviewed documents from the U.S. Army Materiel Systems Analysis Activity, Aberdeen, Maryland, and the U.S. Army Communications and Electronics Command, Office of the Program Manager for Joint STARS, Fort Monmouth, New Jersey. We conducted this review from August 1995 to April 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to other appropriate congressional committees; the Director, Office of Management and Budget; and the Secretaries of Defense, the Army, and the Air Force. We will also make copies available to other interested parties upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report were Thomas J. Schulz, Charles F. Rey, Bruce H. Thomas, and Gregory K. Harmon. Limited Procurement Urgent (LPU). The LPU GSMs were produced and deployed as replacements to the AN/UPD-7 Ground Station Terminal. 
They receive data from the Mohawk Side Looking Airborne Radar and do not receive/process data from Joint Surveillance Target Attack Radar System (Joint STARS) E-8 aircraft. The Army acquired nine LPU GSMs. They are expected to be decommissioned no later than fiscal year 1997. Interim Ground Station Module (GSM). The Interim GSM receives and processes data from both the Joint STARS E-8 aircraft and the Mohawk Side Looking Airborne Radar. Eight engineering and manufacturing development Interim GSMs were developed and fielded to the XVIII Airborne Corps. These systems represent the current GSM contingency force. The Interim GSM was deployed to Operation Desert Storm/Desert Shield. No production is planned. Medium GSM. This module provides enhancements to the Interim GSM capability. Its development stemmed from a Department of Defense (DOD) decision that was made in fiscal year 1989 to restructure the Army Joint STARS GSM program. The Medium GSM enhancements include a downsized electronic suite, an enhanced man/machine interface with extensive Built In Test/Built In Test Equipment capabilities, and the ability to simultaneously display and analyze data from multiple sensors. The Army acquired 12 Medium GSMs. Light GSM. This module is housed in a lightweight multipurpose shelter, a standard integrated command post shelter variant, mounted on a High Mobility Multi-Purpose Wheeled Vehicle. It is to provide the light/contingency forces a C-130 drive-on/drive-off Joint STARS capability. The Light GSM has a prime and support vehicle, each with a trailer/generator in tow. It is supposed to be able to operate on the move, receive unmanned aerial vehicle imagery and intelligence reports, and incorporate electronic map backgrounds. The Army plans to acquire a total of 10 Light GSMs. Common Ground Station (CGS). The CGS system is to provide Light GSM functionality with the addition of the integration of secondary imagery data. 
Further enhancements are expected and are to be achieved through post-award modifications to the contract. Two versions of this ground station are being contemplated (i.e., a light and heavy CGS). The Light CGS will be patterned on the Light GSM two-vehicle configuration. The heavy CGS is to be a track-mounted system, intended to provide the heavy forces a high speed, cross-country/off-road GSM. It is to be integrated into a Bradley Fighting Vehicle variant. Integration of the CGS capability into a tracked vehicle is part of the preplanned product improvement initiatives and will not be included in the fiscal year 1996 CGS contract award. Initial CGS fielding is planned for fiscal year 1998. The Army currently anticipates the acquisition of 73 CGS systems. The following are GAO’s comments on DOD’s letter dated January 24, 1996. 1. While the CGS contractor has prior experience developing and producing ground stations, those ground stations have undergone limited testing and demonstrated disappointing results. Among its previous work, the CGS contractor developed and produced the two immediate predecessor GSMs to the CGS, the Medium and Light GSMs. As we stated in our report, based on the results of a limited user test of the Medium GSM, the Army Operational Test and Evaluation Command stated that the Medium GSM consistently demonstrated the potential to be operationally effective and the potential to be operationally suitable. 
It noted that the “current software lacks robustness and reliability, and limits mission performance.” It recommended, among other things, that prior to LRIP fielding the Medium GSM “must successfully complete an independently evaluated operational demonstration including simultaneous employment of all software, interface, and tactics, techniques, and procedures corrections.” Furthermore, the Light GSM passed only 1 of 12 performance-related criteria during developmental testing, and neither the Medium nor the Light GSM has yet successfully completed an OT&E. 2. We continue to believe that the CGS acquisition strategy risks millions of dollars on systems that have not yet been demonstrated operationally effective and suitable. We have, however, revised the report to reflect the Army’s apparent commitment to evaluate the operation of the Joint STARS system during deployment to Bosnia-Herzegovina. 3. We have revised our recommendation to allow the Army to maintain its CGS contract in effect and thus avoid a break in production. Because the contract provides decreasing unit costs over its life, and since the Army has already committed to 18 first-year LRIP systems, we want to further limit LRIP pending successful completion of an OT&E. 
| GAO reviewed the Department of the Army's test and acquisition plans for the Common Ground Station (CGS), the fifth version of the Joint Surveillance Target Attack Radar System (Joint STARS) ground station modules (GSM). GAO found that: (1) the Army planned to purchase 22 CGS in two years of low-rate initial production (LRIP) at a cost of $138 million, but it now plans to procure 34 CGS systems; (2) the Army has neither demonstrated an urgent need for CGS nor proved that the expected benefits from accelerated procurement outweigh its risks; (3) by 1998, the Army will need at least four CGS to complete operational test and evaluation; (4) since earlier versions of CGS have not tested well or completed an operational test and evaluation, the Army's acceleration of CGS LRIP increases the risk of procuring a costly and ineffective system; and (5) because the Army is only required to purchase one CGS in the second year of LRIP, it could significantly reduce system costs by procuring fewer systems in the early stages of the contract. |
The Davis-Bacon Act was enacted in 1931, in part, to protect communities and workers from the economic disruption caused by contractors hiring lower-wage workers from outside their local area, thus obtaining federal construction contracts by underbidding competitors who pay local wage rates. Labor administers the act through its Wage and Hour Division, which conducts voluntary surveys of construction contractors and interested third parties on both federal and nonfederal projects to obtain wages paid to workers in each construction job by locality. It then uses the data submitted on these survey forms to determine locally prevailing wage and fringe benefit rates for its four construction types: building, heavy, highway, and residential. To determine a prevailing wage for a specific job classification, Labor considers sufficient information to be the receipt of wage data on at least three workers from two different employers in its designated survey area. Then, in accordance with its regulations, Labor calculates the prevailing wage by determining if the same wage rate is paid to the majority (more than 50 percent) of workers employed in a specific job classification on similar projects in the area. If the same rate is not paid to the majority of workers in a job classification, the prevailing wage is the average wage rate weighted by the number of employees for which that rate was reported. In cases where the prevailing wage is also a collectively bargained, or union, rate, the rate is determined to be “union-prevailing.” To issue a wage determination—a compilation of prevailing wage rates for multiple job classifications in a given area—Labor must, according to its procedures, also have sufficient data to determine prevailing wages for at least 50 percent of key job classifications. Key job classifications are those determined necessary for one or more of the four construction types. 
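Labor's two-step rule described above (a single rate prevails if it is paid to a majority of workers; otherwise an employment-weighted average is used) can be sketched as follows. The function name and data layout are illustrative, not Labor's actual implementation.

```python
from collections import Counter

def prevailing_wage(wage_reports):
    """wage_reports: (hourly_rate, worker_count) pairs reported for one job
    classification on similar projects in the survey area."""
    totals = Counter()
    for rate, count in wage_reports:
        totals[rate] += count
    workers = sum(totals.values())
    top_rate, top_count = totals.most_common(1)[0]
    if top_count > workers / 2:
        return top_rate                                     # majority rule
    return sum(r * c for r, c in totals.items()) / workers  # weighted average

# A single rate paid to 6 of 10 workers prevails outright.
prevailing_wage([(25.00, 6), (22.50, 3), (30.00, 1)])  # -> 25.0
# With no majority rate, the employment-weighted average prevails.
prevailing_wage([(25.00, 4), (20.00, 4), (30.00, 2)])  # -> 24.0
```

Labor would apply this computation only when the survey data meet its sufficiency threshold of wage data on at least three workers from two different employers.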
By statute, Labor must issue wage determinations based on similar projects in the “civil subdivision of the state” in which the federal work is to be performed. Labor’s regulations state the civil subdivision will be the county, unless there are insufficient wage data. When data from a county are insufficient to issue a wage rate for a job classification, a group of counties is created. When data are still insufficient, Labor includes data from contiguous counties, combined in “groups” or “supergroups” of counties, until sufficient data are available to meet threshold guidelines to make a prevailing wage determination. Expansion to include other counties, if necessary, may continue until data from all counties in the state are combined. Counties are combined based on whether they are metropolitan or rural; metropolitan and rural counties cannot be mixed. Labor has taken several steps over the last few years to address issues with its Davis-Bacon wage surveys. For example, it finished 22 open surveys that had accumulated since the agency started conducting statewide surveys in 2002. Officials said completing these surveys will allow them to focus on more recent surveys. Labor also changed how it collects and processes information for its four construction types by surveying some construction types separately rather than simultaneously, using other available sources of wage data, adjusting survey time frames, and processing survey data as they are received rather than waiting until a survey closes. For highway surveys, Labor officials said they began using certified payrolls as the primary data source because certified payrolls provide accurate and reliable wage information and eliminate the need for Labor to verify wage data reported in surveys. Labor officials estimated these changes will reduce the processing time for highway surveys by more than 80 percent, or from about 42 months to 8 months. 
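The county-combining process described above is, in effect, a loop that widens the survey area until Labor's sufficiency threshold (wage data on at least three workers from two different employers) is met. A minimal sketch follows; the ordered list of candidate areas (county, then group, supergroup, and statewide) and the data layout are assumptions for illustration.

```python
def sufficient(reports):
    """Labor's threshold: at least three workers from two different employers."""
    workers = sum(r["workers"] for r in reports)
    employers = {r["employer"] for r in reports}
    return workers >= 3 and len(employers) >= 2

def survey_area(expansions, data):
    """expansions: candidate areas in widening order (the county, then its
    group, supergroup, and finally all same-type counties statewide);
    metropolitan and rural counties are never mixed within an area."""
    for area in expansions:
        reports = [r for county in area for r in data.get(county, [])]
        if sufficient(reports):
            return area, reports
    return None, []  # data still insufficient even statewide

# One employer with two workers in county A is insufficient; adding
# contiguous county B meets the threshold.
data = {"A": [{"employer": "Acme", "workers": 2}],
        "B": [{"employer": "Bravo", "workers": 2}]}
area, reports = survey_area([{"A"}, {"A", "B"}], data)  # area -> {"A", "B"}
```

The employer names and counts here are hypothetical; the point is only that expansion stops at the first area meeting the threshold.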
For building and heavy surveys, Labor began a five-survey pilot in 2009, adjusting survey time frames—with shorter time frames for areas in which there are many active projects—to allow Labor to better manage the quantity of data received. In addition, Labor officials said their regional office staff have begun processing survey data as they are received rather than waiting until a survey closes, which, they said, will improve timeliness and accuracy because survey respondents will be better able to recall submitted information when contacted by regional office staff for clarification and verification. Labor expects these changes to reduce the time needed to process building and heavy surveys by approximately 54 percent, or from about 37 months to 17 months. However, while it is too early to fully assess the effects of Labor’s 2009 actions, our review found that changes to data collection and processing may not achieve expected results. We were able to analyze the timeliness of 12 of the 16 surveys conducted under Labor’s new processes at the time of our review. Of those 12 surveys—8 highway and 4 building and heavy—which we assessed against Labor’s revised timelines, we found 10 behind schedule, 1 on schedule, and 1 not started as of September 10, 2010. A challenge to survey timeliness is the fact that Labor conducts a “universe” or “census” survey of all active construction projects within a designated time frame and geographic area. As a result, the number of returned survey forms and the time required for the regional offices to process the data can vary widely. For example, for 14 surveys conducted prior to Labor’s 2009 changes, the number of forms returned per survey ranged from less than 2,000 to more than 8,000, and the average processing time per survey for data clarification and analysis ranged from 10 months to more than 40. Moreover, Labor cannot entirely control when it receives survey forms. 
Some regional office officials said the bulk of the forms are returned on the last day of a survey, limiting officials’ ability to gain time by processing forms while the survey is ongoing as planned under the 2009 changes. To address these challenges, OMB guidance suggests agencies consider the costs and benefits of conducting a sample survey (versus a census survey) because it can often ensure data quality in a more efficient and economical way. The fact that Labor is behind schedule on surveys begun under the new processes may affect its ability to update the many published nonunion-prevailing wage rates, which are several years old. Labor’s fiscal year 2010 performance goal was for 90 percent of published wage rates for building, heavy, and highway construction types to be no more than 3 years old. Our analysis found that 61 percent of published rates for these construction types were 3 years old or less. However, this figure can be somewhat misleading because of the difference in how union- and nonunion-prevailing wage rates are updated. Union-prevailing rates account for almost two-thirds of the more than 650,000 published building, heavy, and highway rates and, according to Labor’s policy, can be updated when there is a new collective bargaining agreement without Labor conducting a new survey. We found almost 75 percent of those rates were 3 years old or less. However, 36 percent of the nonunion-prevailing wage rates were 3 years old or less and almost 46 percent were 10 or more years old. These rates are not updated until Labor conducts a new survey. Several of the union and contractor association representatives we interviewed said the age of the Davis-Bacon nonunion-prevailing rates means they often do not reflect actual prevailing wages, which can make it difficult for contractors to successfully bid on federal projects. 
Beyond concerns with processes and timelines, we also found that critical problems with Labor’s wage survey methodology continue to hinder its survey quality. OMB guidance states that agencies need to consider the potential impact of response rate and nonresponse on the quality of information obtained through a survey. A low response rate may mean the results are misleading or inaccurate if those who respond differ substantially and systematically from those who do not respond. However, Labor cannot determine whether its Davis-Bacon survey results are representative of prevailing wages because it has not calculated survey response rates since 2002, and, other than a second letter automatically sent to nonrespondents, does not currently have a program to systematically follow up with or analyze nonrespondents. While a senior Labor official said the agency is taking steps to again calculate response rates, these changes have not been fully implemented and it is unclear if they will result in improved survey quality. The utility of issuing wage determinations at the county level is also questionable. Labor’s regulations state the county will normally be the civil subdivision for which a prevailing wage is determined; however, Labor is often unable to issue wage rates for job classifications at the county level because it does not collect enough data to meet its current sufficiency standard of wage information on at least three workers from two employers. In the results from the four surveys we reviewed, Labor issued about 11 percent of wage rates for key job classifications using data from a single county (see fig. 1). Moreover, in 1997, Labor’s OIG reported that issuing rates by county may cause wage decisions to be based on an inadequate number of responses. In the four surveys we reviewed, more than one-quarter of the wage rates were based on data reported for six or fewer workers (see fig. 2). 
We analyzed wage rates for key job classifications because wage rates for nonkey job classifications can only be issued at the county or group level, but not at the supergroup or state level. Regional office officials said they may combine rates from counties with the exact same wage and fringe benefit data in their final wage compilation report, the WD-22. However, the rates being combined may have been calculated at different geographic levels—for example, one county’s rates may have been calculated at the group level while another county’s rates may have been calculated at the supergroup level. The geographic level at which rates for combined counties were calculated is not reported on the WD-22; therefore, we reported the percentage of these rates separately. In our interviews with stakeholders, concerns about the survey process and accuracy of the published wage determinations were cited as disincentives to participate. Contractors may lack the necessary resources, may not understand the purpose of the survey, or may not see the point in responding because they believe the prevailing wages issued by Labor are inaccurate, stakeholders told us. Officials we interviewed in Labor regional offices echoed many of these same concerns about contractor participation. While 19 of the 27 contractors and interested parties we interviewed said the survey form was generally easy to understand, some identified challenges with completing specific sections, such as how to apply the correct job classification. Labor officials said they did not pretest the current survey form with respondents, and our review of reports by Labor’s contracted auditor for four published surveys found most survey forms, which are verified against payroll data, had errors in areas such as number of employees and hourly and fringe benefit rates. 
Labor officials said they have plans to address portions of the form that confuse respondents, but could not provide specifics on how they intend to solicit input from respondents—a step recommended by OMB to reduce error. Fifteen stakeholders we interviewed said there is a lack of transparency in wage determinations because key information is not available or hard to find. Both contractor associations and union officials said improving transparency in how the published wage rates are set could enhance understanding of the process and result in greater participation in the survey. A senior Labor official said the agency is considering posting information used to determine wage rates online. Finally, while the pre-survey briefing is one of Labor’s primary outreach efforts to inform stakeholders about upcoming surveys, awareness of these briefings was mixed. In three states that were surveyed for building and heavy construction in 2009 or 2010—Arizona, North Carolina, and West Virginia—all the union representatives we interviewed said they were aware of the pre-survey briefing and representatives from four of the six state contractor associations we interviewed said they were aware a briefing had been conducted. However, in Florida and New York—last surveyed in 2005 and 2006 respectively—none of the 12 contractors we interviewed were aware that a briefing had been conducted prior to the survey. Seven of 27 stakeholders indicated that alternative approaches, such as webinars or audioconferences, might be helpful ways to reach additional contractors. While Labor has made some changes to improve the wage determination process, further steps are needed to address longstanding issues with the quality of wage determinations and enhance their transparency. In our report, we suggested that Congress consider amending its requirement that Labor issue wage rates by civil subdivision to provide the agency with more flexibility. 
To improve the quality and timeliness of the wage surveys, we recommended that Labor enlist an independent statistical organization to evaluate and provide objective advice on the survey, including its methods and design; the potential for conducting a sample survey instead of a census survey; the collection, processing, tracking, and analysis of data; and the promotion of survey awareness. We also recommended that Labor take steps to improve the transparency of its wage determinations, which could encourage greater participation in its survey. After reviewing the draft report, Labor agreed with our recommendation to improve transparency, but said obtaining expert survey advice may be premature, given current and planned changes. We believe a time of change is exactly when the agency should obtain expert advice to ensure its efforts improve the quality of the wage determination process. A complete discussion of our recommendations, Labor’s comments, and our response is provided in our report. Chairman Walberg, Ranking Member Woolsey, and Members of the Subcommittee, this concludes my prepared remarks. I would be happy to answer any questions you may have. For further information regarding this statement, please contact Andrew Sherrill at (202) 512-7215 or sherrilla@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Gretta L. Goodwin (Assistant Director), Amy Anderson, Brenna Guarneros, Susan Aschoff, Walter Vance, Ronald Fecso (Chief Statistician), Melinda Cordero, Mimi Nguyen, and Alexander Galuten. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This testimony discusses the Department of Labor's (Labor) procedures for determining prevailing wage rates under the Davis-Bacon Act. Davis-Bacon wages must be paid to workers on certain federally funded construction projects, and their vulnerability to the use of inaccurate data has long been an issue for Congress, employers, and workers. More recently, the passage of the American Recovery and Reinvestment Act of 2009 focused attention on the need for accurate and timely wage determinations, with more than $300 billion estimated to provide substantial funding for, among other things, federally funded building and infrastructure work potentially subject to Davis-Bacon wage rates. In the 1990s, we issued two reports that found process changes were needed to increase confidence that wage rates were based on accurate data. A third report found that changes then planned by Labor, if successfully implemented, had the potential to improve the wage determination process. However, in 2004, Labor's Office of Inspector General (OIG) found that wage data errors and the timeliness of surveys used to gather wage information from contractors and others continued to be issues. The testimony will discuss (1) the extent to which Labor has addressed concerns regarding the quality of the Davis-Bacon wage determination process and (2) additional issues identified by stakeholders regarding the wage determination process. This testimony is based on our recently issued report, titled "Davis-Bacon Act: Methodological Changes Needed to Improve Wage Survey." In summary, we found that recent efforts to improve the Davis-Bacon wage survey have not yet addressed key issues with survey quality, such as the representativeness and sufficiency of survey data collected. 
Labor has made some data collection and processing changes; however, we found some surveys initiated under these changes were behind Labor's processing schedule. Stakeholders said contractors may not participate in the survey because they do not understand its purpose or do not believe the resultant prevailing wages are fully accurate. In addition, they said addressing a lack of transparency in how the published wage rates are set could result in a better understanding of the process and greater participation in the survey. We suggest Congress consider amending its requirement that Labor issue wage rates by civil subdivision to allow more flexibility. To improve the quality and timeliness of the Davis-Bacon wage surveys, we recommend Labor obtain objective expert advice on its survey design and methodology. We also recommend Labor take steps to improve the transparency of its wage determinations. |
Founded in 1863 by congressional charter, the National Academy of Sciences has a long history of serving as a scientific adviser. The Academy, which has a total membership of 4,800, also serves as an honorary institution to recognize distinguished members of the scientific community. Among other activities, the Academy also organizes symposiums, manages scientific databases, and serves as a clearinghouse for research. Throughout this report we use “Academy” to refer to the constituent members of the Academy complex: the National Academy of Sciences, the National Academy of Engineering, the Institute of Medicine, and the National Research Council. In 1916, the Academy formed the National Research Council to broaden its committee membership to include non-Academy members and to oversee the Academy’s advisory activities. In a 1998 report, the Academy reported that committee membership consists of 55 percent from academia, 24 percent from industry, 9 percent from nonprofit institutions, and 12 percent from different levels of government. The National Academy of Engineering and the Institute of Medicine were established in 1964 and 1970, respectively, to recognize distinguished members in these fields and to provide more specialized advice in these areas. The Academy is organized by study units, which produce reports in the following topic areas: transportation, health and safety, science, commerce, natural resources, defense, space, education, and international affairs. (See table 1.) The Academy issued 1,331 committee reports from January 1993 to June 1997 and had an average annual budget of about $150 million. During those 5 years, most of its work was performed for the federal government, which provided the Academy with 87 percent of its revenue. (See fig. 1.) 
The Departments of Transportation, Energy, Health and Human Services, and the Army; the National Science Foundation; and the National Aeronautics and Space Administration have been its largest federal sponsors—amounting to 75 percent of the total revenues for 1993 to 1997. The Academy also advises state governments, private industry, and nonprofit institutions, but that work is limited by internal Academy guidelines. In addition, the Academy may use its endowment to fund self-initiated studies deemed critical by the Academy leadership. The Federal Advisory Committee Act Amendments of 1997 addressed concerns over the openness of the Academy’s procedures. Prior to the amendments, the Academy’s committee procedures included some openness. A 1975 policy document stated that committee meetings where data would be gathered were to be open to the public with advance notice given. Announcements of scheduled open meetings were published monthly in a newsletter by the Academy’s Office of Information. However, the study unit heads determined which projects would have scheduled and announced open meetings. Executive meetings and working meetings, referred to as deliberative sessions, would “not normally be open to the public.” A 1995 proposed change to the Academy’s public access policy, among other things, further defined the types of meetings that could be closed and applied the policy uniformly across the Academy’s major study units. This proposal was under consideration at the time the amendments were enacted. According to Academy officials, the Academy had three main concerns that caused it to seek relief from the Federal Advisory Committee Act: (1) the erosion of independence if the Academy was under the influence of sponsoring agencies, (2) the inability to recruit committee members if committee deliberations were open to the public, and (3) the burden of administrative requirements that would render the Academy unresponsive to the government. 
Paramount among these concerns was the Academy’s independence from the influence of sponsoring agencies. Under the act, a federal government officer or employee would have to chair or be present at every advisory committee meeting. This individual would have the power to adjourn the meeting “whenever he determines it to be in the public’s interest.” According to Academy officials, the Academy could lose sole authority in appointing committee members, and the Academy and committee members could be under pressure from a sponsoring agency to change a report during the drafting process. Under the act and GSA regulations, advisory committee meetings, including deliberative meetings, would be open to the public. However, the Academy opposed opening its deliberative meetings to the public because it believed that such an action could stifle open debate and criticism of ideas in those meetings. The Academy was also concerned that the independence of the committees’ deliberations and the Academy’s review process would be jeopardized by attempts of sponsors and special interest groups to bring political pressure to bear. Academy officials said that closed committee deliberations are fundamental for ensuring the independence of their studies and the scientific quality of their reports. Moreover, they stated, if draft reports were available to the public, the first draft would become the enduring impressions of a report, regardless of any changes made later. In addition, the President of the Academy said that it could be more difficult to recruit potential committee members in the future if deliberations were open to the public. We surveyed 12 current and former Academy committee members to obtain their views on whether or not they would serve on Academy committees if the deliberative meetings were open to the public. 
Two members said that they would serve, six said that their decision to serve would depend on the topic of study, and three said that they probably would not serve on a committee whose deliberations were open to the public. One member did not respond directly to the question but said that closed deliberative sessions encourage greater candor among the members. In addition, these members generally echoed the Academy officials’ views regarding the need for closed deliberative sessions. The three members who responded that they would probably not serve said that open deliberations could seriously jeopardize the quality of the reports. Two members said that Academy study committees might be difficult to staff if deliberations were open to the public. Eleven out of 12 respondents indicated that the Academy should retain the ability to close committee deliberations. Finally, the Academy was concerned that the amount of time and expense associated with implementing the act would render the Academy unresponsive to the government in general and to the Congress in particular. Of particular concern was the requirement under the act that each committee have a charter. Since the Academy is not a federal agency, the federal agency sponsoring the Academy study would prepare the charter and submit it for review by GSA. Academy officials estimated that the process would take between 6 and 12 months, on average, a length of time that an Academy official said would render the Academy unresponsive to the government’s requests for information. In addition, most of the Academy’s studies are funded by multiple agencies. Thus, the Academy was not certain which agency would be responsible for fulfilling the administrative requirements of the act. Academy officials also pointed out that applying the act to the Academy would more than double the number of committee charters that GSA would have to review each year. 
Prior to the enactment of the amendments of 1997, the Academy established a number of procedures for committee work that are intended to help ensure the integrity and the openness of committee activities. The procedures consist of the following phases: project formulation, committee selection, committee work, report review, and report release and dissemination. (See fig. 2.) According to Academy officials, the whole process can take anywhere from 4 months to 2 years (usually from 6 to 18 months). During the project formulation phase, the Academy assigns the project to a study unit. According to Academy guidance, the study unit is responsible for defining the scope of the project, leaving room for the committee to further define the study, and for developing the initial cost estimates. After the study unit approves the project, the Academy gives final approval for the project. Then a contract, grant, or cooperative agreement (depending on the sponsor) is drawn up and entered into with the agency. A permanent Academy staff member, referred to as the responsible staff officer, is assigned to organize and support the project. The staff officer is responsible for ensuring that institutional procedures and practices are followed throughout the study and that the study stays on schedule and within budget. According to the Academy’s documents, each project is conducted by a committee of subject matter experts who serve without compensation. Committee selection starts with suggestions from the sponsoring organization, members of the Academy, outside professional colleagues, and Academy staff. After review of the suggestions, the President of the Academy selects committee candidates. The Academy’s procedures require that each committee candidate fill out a form on his or her potential conflicts of interest. 
The form consists of five questions asking for the member’s relevant organizational affiliations, financial interests, research support, government service, and public statements and positions concerning the committee’s topic. We reviewed a sample (about 10 percent) of the 331 current committees to determine whether the forms had been filed and found that the Academy’s procedures were generally being followed. Under Academy procedures, 5 of the 30 committees selected were not required to file the conflict-of-interest forms because they were not subject to section 15 for various reasons. Of the remaining 25 committees, we found that almost all members (316 out of 341 or 93 percent) had forms on file. At the first meeting of every committee, the Academy’s procedures require a confidential discussion among committee members and project staff of potential conflicts of interest. If a conflict of interest is identified, the committee member may be asked to resign from the committee. If the Academy determines that the conflict is unavoidable, the Academy will make the conflict public and will retain the committee member. After this meeting, the executive director of the relevant study unit makes a tentative determination of whether the committee as constituted is composed of individuals with the requisite expertise to address the task and whether the points of view of individual members are adequately balanced such that the committee as a whole can address its charge objectively. Final approval of the committee membership, however, rests with the President of the Academy. Committees meet in data-gathering sessions that are generally open to the public and in deliberative sessions that are closed to the public. 
The Academy defines a data-gathering meeting as “any meeting of a committee at which anyone other than committee members or officials, agents, or employees of the institution is present, whether in person or by telephone or audio or video teleconference.” Committees also meet in closed sessions to discuss financial and personnel matters, to discuss conclusions, and to draft the committee report. The Academy’s responsible staff officer facilitates the meetings. In order to identify the number of open versus closed meetings, we reviewed the meetings held from December 1997 through June 1998 for the 331 committees. Since we found that most meetings were a combination of open and closed sessions, we identified the number of open and closed hours during these meetings. Of the 331 committees, 129 either had no meetings or were not subject to section 15 for various reasons. The remaining 202 committees held a total of 353 meetings. For 300 (or 85 percent) of those meetings, at least some portion of the meeting was closed. For 139 of the 300 meetings where complete information about open and closed sessions was available, we found that slightly less than half (45 percent) of the time was spent in closed sessions. For 251 projects, we determined the reasons for the closed sessions: 61 meetings included discussions of potential bias of committee members, 36 meetings included discussions of the committee’s composition and balance, and 201 meetings involved drafting the committee report. We also found that seven data-gathering meetings were closed under Freedom of Information Act exemptions. Every report is the collective product of the committee. According to the Academy’s documents, a committee member may draft a chapter or portion of a report, but the author of record is the entire committee. 
The Academy’s responsible staff officer can help with many aspects of developing the report, including researching, integrating portions of the report written by committee members, and ensuring consistent style and format, but the conclusions and recommendations are attributed to the committee as a whole. Throughout its work, the committee is subject to the oversight of the Academy’s supervisory boards and commissions. The next step in the process is an independent review of the draft by individuals whose review comments are provided anonymously to the study committee. This process allows the Academy to exercise internal oversight and provides an opportunity for the study committee to obtain reactions from a diverse group of people with broad technical and policy expertise in the areas addressed by the report. The anonymity of the reviewers is intended to encourage individual reviewers to express their views freely and to permit the study committee to evaluate each comment on its merits without regard for the reviewer’s position or status. The Academy Report Review Committee, composed of members of the Academy, oversees the report review process and appoints either a monitor and/or coordinator depending on the type of study. Liaisons are appointed from the Academy’s membership to the major study unit for the purpose of suggesting qualified reviewers. The monitor and/or coordinator either participates in the selection of reviewers or checks the list of reviewers for their relevant expertise or particular perspective. Typically six to eight reviewers are appointed, although more are acceptable for a major policy report. According to the Academy’s report review guidelines, the review of a manuscript takes about 10 weeks, on average, from when a report is sent to the reviewers until final approval; however, the time ranges from a few days to many months. 
The reviewers assess whether the report addresses the committee’s charge, whether its findings are supported by the evidence presented, whether its exposition is effective, and whether its tone is impartial. All study committee members are given copies of the reviewers’ comments (with the names of the reviewers removed from the comments) in time to prepare or approve a response to the comments. After the comments have been submitted, the monitor and/or coordinator may prepare a brief summary of the key review issues for the study committee. The study committee may provide a written explanation of how each comment was handled, or it may address the key review issues. The monitor and/or coordinator judges the adequacy of the committee’s responses and may require a resubmission to the reviewers. The Academy’s procedures state that no report is to be released to the project sponsor or the public, and no findings or recommendations are to be disclosed until this review process has been satisfactorily completed. All committee members are contacted to ensure that they approve the report before it is published or released. The Report Review Committee chair provides the final approval of the reports. The Academy is responsible for the report’s dissemination plan. The report sponsor may also be involved in developing the plan. Targeted groups are selected to ensure that the report reaches all appropriate audiences. The report may also be made available via the National Academy Press web site. Briefings are often arranged for interested groups, and reports may become topics of future Academy workshops or symposia. The Academy developed a web site for current project information to increase public access as a result of section 15, added by the Federal Advisory Committee Act Amendments. However, we found that this information is not always posted in a timely manner and is sometimes incomplete. 
Among other things, section 15 generally requires the Academy to make names and brief biographies of committee members public, post notice of open meetings, make available written materials presented to the committee, post summaries of meetings that are not data-gathering meetings, make copies of the final committee report available to the public, and make available the names of the principal non-Academy reviewers of the draft report. The committee members’ names and biographies, notice of open meetings, and summary minutes of closed meetings are available on the web site of current projects. Copies of reports, which include the names of the external reviewers of the reports, are available on the National Academy Press web site. According to Academy officials, written materials presented to the committees by individuals who are not agents, officials, or employees of the Academy are available for inspection at the Academy’s public reading rooms in Washington, D.C. We reviewed a sample of the 331 current projects to determine whether the database included the names of the committee members. Five of the 30 projects that we reviewed were not required by the act or by the Academy to post committee membership for various reasons. We found that 24 of the 25 projects had the names of the members available on the web site. Five projects had only the names of the members and no biographical statements. However, these five committees were not required to post biographies because the committees were created prior to the act. The Academy’s guidelines state that the summary minutes for closed meetings should be posted to the web site, preferably within 10 business days of the meeting. In order to determine whether this requirement was met by the Academy, we reviewed data on the closed meetings for the 202 committees that held meetings from December 17, 1997, through June 17, 1998. 
As previously stated, these committees held a total of 353 meetings, with 300 of those meetings having some portion closed. We found that 270 (or 90 percent) had the minutes of the closed sessions on the web site. The minutes of these closed sessions had an average posting time of 13.5 calendar days, within the Academy’s guideline of 10 business days (about 14 calendar days). However, the time to post the minutes ranged from 0 to 124 calendar days, with 26 percent of the minutes posted 15 or more days after the meeting. At the time of our audit, spot checks of information posted on the web site were conducted at least once a week for missing or improper information. However, we found that for 63 of the 331 current committees (about 19 percent), there were chronological or typographical errors or missing data in the information provided on one or more of the meetings. For example, the listings of the meetings for three projects were out of order. One meeting had two different dates listed on the project web site. For 34 projects, the agenda or summary minutes were not posted. The Academy has already taken action to correct this information or has adequately explained these specific problems. In addition, since we conducted our audit, the Academy has created a records officer position responsible for checking the timeliness and accuracy of data on a daily basis. Through the web site, the Academy also elicits public comments about committee composition. The public is allowed 20 calendar days to comment on the proposed committee members and/or suggest new members. From the web site’s inception in December 1997 through June 1998, the Academy received a total of 120 comments. Only 13 of those comments concerned committee composition—all concerning four committees: those on smokeless and black powder, illegal drug policy, repetitive motion and muscular disorders, and cancer research among minorities.
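The timeliness figures above reduce to simple descriptive statistics over posting delays. The sketch below, using invented delay values rather than the Academy's actual data, shows how an average posting time, the share of minutes posted 15 or more days after a meeting, and a comparison against the 10-business-day guideline (roughly 14 calendar days) would be computed.

```python
from statistics import mean

# Hypothetical delays, in calendar days, between each closed session and
# the posting of its summary minutes (illustrative values only).
delays = [2, 5, 8, 10, 14, 17, 30, 34, 0, 6]

average_delay = mean(delays)

# Share of minutes posted 15 or more calendar days after the meeting,
# the threshold used in the report.
late_share = sum(1 for d in delays if d >= 15) / len(delays)

# A 10-business-day guideline spans about 14 calendar days (two work
# weeks plus the intervening weekends), so an average at or below that
# figure is consistent with the guideline even if some postings run late.
within_guideline = average_delay <= 14

print(f"average: {average_delay:.1f} days, late: {late_share:.0%}, "
      f"within guideline on average: {within_guideline}")
```

As with the Academy's actual figures, an average within the guideline can coexist with a sizable minority of late postings.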
Of these comments, six included suggestions for additional committee members, three provided general or positive comments about committee membership, three included negative comments regarding specific committee members (one of the three members was later removed from consideration), and two discussed the length of the public comment period. Prior to the passage of the Federal Advisory Committee Act Amendments, the Academy had efforts under way to increase public access to and participation in the Academy’s committee work. After the amendments were passed, the Academy’s web site of current projects increased public access to project information. However, the Academy had to quickly create and operationalize its web site of current projects in December 1997, and additional enhancements are under consideration pursuant to suggestions received from the public. Thus, it will be some time before an assessment can be made of the extent to which the general public uses the web site. Regarding the untimely posting of data and incomplete data, the Academy’s new procedures should address our concerns; however, the availability of timely information on current projects depends on the effective implementation of those procedures. We provided a draft of this report to the National Academy of Sciences and GSA for their review and comment. In general, the Academy said that the report was accurate and balanced. Regarding our finding that the Academy’s data available on the web site are not always timely or complete, the Academy believed it was important to note that in no case was there a violation of the requirements of section 15. We agree. Because section 15 does not provide a time frame for posting summaries of closed meetings, we noted instances in which data were untimely by the Academy’s own guidelines and instances in which the information provided had some errors. The full text of the Academy’s comments appears in appendix I. GSA had no comments on the report.
To determine why the Academy sought relief from the act, we interviewed Academy officials and reviewed their statements to the Congress. We also talked with several committee members to obtain their views on the act—the Academy selected the committee members, with input from us. Each Academy study unit and the Presidents of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine selected members to respond to our questions. The Academy narrowed this sample, and each candidate was asked whether he or she would participate in the survey. The sample included past and current committee members and chairs of committees from across the country and from private industry, academia, and not-for-profit institutions. To identify the Academy’s procedures for providing advice to the federal government, we interviewed Academy officials. We also reviewed the Academy’s internal documents outlining the procedures, the treasurer’s reports, and annual reports. To determine whether the Academy had implemented section 15, we interviewed Academy officials and reviewed official documents. We also reviewed the Academy’s web site information, including committee meeting agendas for both open and closed portions of meetings and the content of the closed meetings as described in summary minutes, for Academy projects that were active as of June 17, 1998. To make this determination, we calculated the hours of open and closed meetings, calculated the time in which summary minutes were posted for closed meetings, and categorized the reasons for closed meetings. Each step was verified for accuracy and completeness. Only meetings that occurred in the 6-month period from December 17, 1997, to June 17, 1998, were analyzed. Of the 331 current Academy projects, 69 had no meetings within the stated 6-month time frame, and 24 had no meetings whatsoever. 
Thirty-six projects were standing committees that were not subject to section 15 and were therefore excluded from our analyses. None of the current project information from the web site was independently verified against the Academy’s original records. For the analysis of open versus closed hours, we considered only the 139 meetings with both open and closed hours. For the closed meetings, we looked only at those meetings with summary minutes or with posted agendas. Of the 300 possible meetings with some closed sessions, 294 were analyzed to determine the reasons for the closed sessions. To measure the Academy’s compliance with the section 15 requirement to make committee members’ names and biographies available for public comment, we reviewed a random sample of 30 current projects’ potential bias and conflict-of-interest forms to determine whether they were present in the Academy’s files and signed by the committee members. We compared the Academy’s files to the committee’s printed lists from the Academy’s current projects web site. Projects that did not have meetings within the December 17, 1997, to June 17, 1998, time frame were not sampled. We conducted our work from May through November 1998 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report for 10 days. At that time, we will send copies of this report to the President of the National Academy of Sciences and the Administrator of the General Services Administration. We will also make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions concerning this report. Major contributors to this report were Diane B. Raynes, Gregory M. Hanna, Lynn M. Musser, and Robin M. Nazzaro.
Pursuant to a congressional request, GAO reviewed the committee process at the National Academy of Sciences, focusing on: (1) the reasons the Academy sought relief from the Federal Advisory Committee Act; (2) the Academy's committee procedures for providing advice to the federal government; and (3) the Academy's implementation of the new requirements for providing information to the public.
GAO noted that: (1) according to Academy officials, the Academy sought relief from the act for a number of reasons; (2) central to its concerns was the Academy's ability to maintain sole authority in appointing committee members and to conduct its work independently from sponsoring agencies' influence; (3) in addition, the Academy opposed opening deliberative meetings on the grounds that such an action could stifle open debate and could impact the Academy's ability to recruit committee members; (4) finally, the Academy was concerned about the amount of time and expense to perform the administrative requirements of the act, which could render the Academy unresponsive to the government; (5) prior to the enactment of the amendments, the Academy developed a number of procedures governing its committees' activities, including project formulation, committee selection, committee work, report review, and the release and dissemination of reports; (6) according to Academy officials, these procedures are intended to help ensure the integrity of advice provided to the federal government; (7) for example, committee selection includes procedures for identifying conflicts of interest and potential bias of committee members; (8) the committee work phase provides an opportunity for some public participation, and committee reports are reviewed by an Academy review committee before they are released to the sponsoring agency and the public; (9) in response to section 15, the Academy developed a web site to increase public access to current project information, however, GAO found that some descriptive information on current projects was not always posted in a timely manner and was not always complete; and (10) during this audit, the Academy addressed these problems and developed additional written guidelines regarding the posting of committee information as well as additional quality assurance procedures. |
The Budget and Accounting Procedures Act of 1950 and the law commonly known as the Federal Managers’ Financial Integrity Act of 1982 (FMFIA) placed primary responsibility for establishing and maintaining internal control on the head of the agency. Internal control is an integral component of an organization’s management that, when properly implemented and operating effectively, provides reasonable assurance that the following objectives are being achieved: (1) effectiveness and efficiency of operations; (2) reliability of financial reporting; and (3) compliance with laws and regulations. Within this broad framework of internal control, DOD must design and implement effective funds control, payment controls, and internal control over financial reporting. Auditors of DOD’s financial statements are to assess the effectiveness of these controls as part of the financial statement audit. However, DOD has acknowledged that long-standing weaknesses in its internal controls, its business systems, and its processes have prevented auditors from determining the reliability of DOD’s financial statement information, including the budgetary information included in DOD’s SBR. Moreover, we have previously reported that a weak overall control environment and poor internal controls limit DOD’s ability to prevent and detect fraud, waste, abuse, and improper payments. Because budgetary information is widely and regularly used for management, the DOD Comptroller designated as one of DOD’s highest priorities the improvement of its budgetary information and processes underlying the SBR. The financial information in the SBR is predominantly derived from an entity’s budgetary accounts, which are used by agencies to account for and track the use of public funds, in accordance with budgetary accounting rules.
The SBR is designed to provide information on authorized budgeted spending authority and links to the Budget of the United States Government (President’s Budget), including the source and availability of budgetary resources, and how obligated resources have been used. According to the Office of Management and Budget, the SBR was added as a basic federal financial statement so that the underlying budgetary accounting information is audited and is, therefore, more reliable for routine management use and budgetary reporting, such as the President’s Budget. In the FIAR Plan, DOD states that it expects to obtain five benefits from its planned efforts to achieve an auditable SBR. According to DOD, its efforts will improve the visibility of budgetary transactions, ensuring a more effective use of resources; provide operational efficiencies through more readily available and accurate cost and financial information; improve financial stewardship through reduced improper payments; improve budget processes and controls, thus reducing violations of funds control laws; and link execution to the President’s Budget, thus providing more consistency with the financial environment. For years, GAO and DOD IG have reported on DOD’s inability to provide effective funds control and report reliable financial information, including budgetary information. In 2008, we reported that DOD’s complex and inefficient payment processes, nonintegrated business systems, and weak internal controls impair its ability to maintain proper funds control, putting DOD at risk of overobligating or overspending its appropriations. Specifically, DOD’s weak internal control environment has hindered its ability to ensure that transactions are accurately recorded, sufficiently supported, and properly executed by trained personnel subject to effective supervision. 
Further, these weaknesses impair DOD’s ability to ensure that amounts recorded as disbursements are matched to the corresponding recorded obligations, resulting in “unmatched disbursements.” These and other weaknesses have prevented DOD from reporting reliable financial information, including budgetary information in an auditable SBR, which DOD’s FIAR Plan seeks to address through a multiyear effort across the military services and defense agencies. For example, we recently reported that inadequate processes, systems controls, and controls for accounting and reporting prevented the Marine Corps from passing an audit of its fiscal year 2010 SBR, the first SBR of a military service that DOD is attempting to successfully audit since the SBR was first required in 1998. Although DOD has dedicated significant resources to improving its financial management, including addressing known weaknesses in its funds control, neither the department nor its auditors have been able to verify that weaknesses have been sufficiently corrected in order to pass an audit. These weaknesses present challenges for DOD in: (1) reducing its risk of overobligating and overexpending its appropriations in violation of the law and making effective use of budgetary resources; (2) improving its ability to eliminate unmatched disbursements and other significant problem disbursements; and (3) producing reliable budgetary information. We have reported that the department is at risk of overobligating and overexpending its appropriations because of its weaknesses in identifying and training its personnel who are responsible for funds control and carrying out supervisory duties, its challenges in properly supporting and accounting for its transactions, and its poor financial systems. These weaknesses have contributed to 64 DOD-reported instances of overobligation or overexpenditure of funds in violation of the law totaling $927.4 million from fiscal year 2007 through September 15, 2011. 
However, other violations may go undetected, uninvestigated, and unreported because of the weaknesses in DOD’s funds control and financial management overall. According to DOD, the most frequent causes of DOD’s overobligations and overexpenditures include inadequate internal controls and standard operating procedures, not following prescribed internal controls and standard operating procedures, lack of appropriate training, and inadequate supervisory involvement or oversight. Examples of reported weaknesses in DOD’s funds control include: Inadequately trained funds control personnel. In 2008, we reported that DOD had not effectively identified and established training programs for departmental personnel who carry out DOD’s funds control. According to DOD, its funds control system relies extensively on the department’s ability to (1) identify individuals who are performing key funds control roles, such as certifying officers, contracting officers, program managers, funds certifying officials, and other departmental accountable officials, who incur obligations and make disbursements and perform related duties, and (2) ensure that those individuals have received the training necessary to fulfill their responsibilities in compliance with the DOD Financial Management Regulation (FMR). We made recommendations to DOD in our report to improve its process and system of identifying and training its key funds control personnel, which DOD agreed to implement, and last year DOD revised the policies in its FMR on this aspect of its funds control. We have not assessed the effectiveness of DOD’s actions. However, as I testified before this Panel in July 2011, DOD has not completed a competency analysis of its financial management personnel and still has significant work to do to address this challenge to achieving its financial improvement goals. Unsupported transactions.
We have reported that DOD components have significant weaknesses in their ability to properly support transactions in order to reliably determine whether their obligations and disbursements are being used for authorized purposes and within the amounts and time frames established by law. For example, we recently reported that the auditors who attempted to audit the Marine Corps fiscal year 2010 SBR were unable to conduct the audit because, among other internal control deficiencies, the Marine Corps lacked documentation to support its transactions, which put the Marine Corps at risk of not being able to verify whether payments were made in the appropriate amount for authorized purposes, and to the appropriate parties. In its Agency Financial Report for Fiscal Year 2010, DOD officials stated that one of 13 material weaknesses that prevent an audit of its financial statements will be resolved by 2017 by implementing processes and systems that can provide necessary transaction-level supporting documentation for its disbursements and collections. Inadequate recording of transactions. DOD faces challenges in properly recording its obligations and disbursements in its accounting and other business systems that impair its ability to track and control the use of public funds. According to DOD’s FMR, obligations and expenditures are required to be recorded accurately and promptly, even if the recording results in a negative amount in the appropriation, fund, or other accounting level. Last week, we reported that the auditors of the Marine Corps’ fiscal year 2010 SBR found that the Marine Corps inappropriately used “bulk obligations” to record estimated liabilities that the Marine Corps did not match to actual payments due to weak internal controls. As discussed below, a similar practice by the military departments led to overobligations in violation of the law. 
Further, DOD reported in its Agency Financial Report for Fiscal Year 2010 that another of 13 material weaknesses that prevented an audit of its financial statements relates, in part, to the department’s inability to properly record payments due from other agencies and the public. Ineffective business systems. In our 2008 report on DOD’s funds control, we found that DOD’s nonintegrated and outdated business systems, including its financial systems and other systems that provide most of DOD’s financial data to the financial systems, were a key impediment to effective funds control, and we noted that DOD had long-term plans to implement modernized, fully integrated, and reliable business systems. However, as I stated before this Panel in July, DOD faces significant challenges in its effort to implement these new systems over the next several years. In its Agency Financial Report for Fiscal Year 2010 and its FIAR Plan, DOD acknowledges the challenges related to weaknesses in DOD’s financial management systems. For example, in DOD’s annual statement on the status of its internal controls included in its Agency Financial Report, DOD reported that the department is not in conformance with internal control requirements because of a material weakness in its financial management systems. DOD’s FIAR Plan states that implementing modernized, effective, and integrated business systems that reliably support the financial needs of the department is critical to achieving the department’s financial improvement and audit readiness efforts. DOD’s ineffective funds control has resulted in overobligations and overexpenditures in violation of the Antideficiency Act (ADA). As we reported in 2008, weaknesses in DOD’s funds control impaired its ability to accurately detect, investigate, and report such violations.
Under the ADA, agencies are prohibited from, among other things, incurring obligations or making expenditures in excess or in advance of appropriations or in excess of apportionments or formal subdivisions of those apportionments. When DOD determines that a violation of the ADA has occurred, the department is to immediately report to the President and Congress all relevant facts and a statement of actions taken and submit a copy to the Comptroller General at the same time. According to copies of ADA violation reports received by the Comptroller General, and as shown in table 1, DOD reported 64 ADA violations from fiscal year 2007 through September 15, 2011, with a total dollar amount of $927.4 million. However, due to DOD’s weaknesses in its funds control process, including the weaknesses described above related to DOD’s challenges in controlling and recording obligations and disbursements and detecting violations, this listing may not be complete because all ADA violations may not have been identified or reported. For example, GAO identified a violation in June 2010 involving the Army’s overobligation of its fiscal year 2008 Military Personnel–Army (MPA) appropriation, as evidenced by a $200 million transfer DOD made to the MPA account from DOD’s working capital fund, which has not yet been reported by DOD. Mr. Chairman, Ranking Member Andrews, I know that you and other members of Congress recently sent a letter to the DOD Comptroller asking for an explanation of why DOD has not reported this and other potential ADA violations. Such an explanation could provide greater transparency over the accuracy of reported numbers and amounts of violations. Because the ADA prohibits, and effective funds control should prevent, overobligations and overexpenditures of public funds, the number and dollar amount of ADA violations are an indicator of the status of DOD’s funds control. 
However, the nature of reported violations can also indicate systemic weaknesses in DOD’s funds control. The following ADA violations involved systemic breakdowns in the controls necessary to track actual amounts of obligations incurred against amounts of available funding: As noted above, we found in June 2010 that the Army Budget Office lacked an adequate funds control process to provide it with ongoing assurance that obligations and expenditures do not exceed funds available in the fiscal year 2008 Military Personnel–Army (MPA) appropriation. We found that the Army’s total obligations against the fiscal year 2008 MPA appropriation exceeded the amount available in the account, as evidenced by the Army’s need to transfer $200 million from the Defense Working Capital Fund, Army appropriation to cover the shortfall. The overobligation likely stemmed, in part, from a lack of communication between the Army budget office and program managers; as a result, the budget office’s accounting records reflected estimates rather than actual amounts until it was too late to prevent the incurrence of excessive obligations in violation of the act. Thus, at any given time in the fiscal year, the Army budget office did not know the actual obligation and expenditure levels of the account. The Army budget office explained that it relies on estimated obligations—despite the availability of actual data from program managers—because of inadequate financial management systems. Similarly, in 2008, Navy officials reported an ADA violation in the Military Personnel–Navy (MPN) appropriation in the amount of $183 million. The violation occurred when the Bureau of Naval Personnel (BUPERS) overobligated the fiscal year 2008 MPN appropriation due to its inability to accurately track the status of obligations and identify the need for additional funding.
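The basic control missing in both examples above is a check of actual obligations against the available balance before a new obligation is recorded. The sketch below is a minimal, hypothetical illustration of that idea, not DOD's actual funds control system; real systems must also track apportionments and formal subdivisions of funds.

```python
class FundsControlError(Exception):
    """Raised when a proposed obligation would exceed available funds."""


class AppropriationAccount:
    """Minimal sketch of administrative funds control: record actual
    obligations as they are incurred and refuse any obligation that
    would exceed the amount available (illustrative only)."""

    def __init__(self, amount_appropriated):
        self.available = amount_appropriated
        self.obligations = []

    def obligate(self, amount, description):
        # The check happens before recording, using actual (not
        # estimated) obligation data, so an overobligation is blocked
        # rather than discovered after the fact.
        if amount > self.available:
            raise FundsControlError(
                f"{description}: {amount} exceeds available {self.available}")
        self.available -= amount
        self.obligations.append((description, amount))


account = AppropriationAccount(1_000_000)
account.obligate(600_000, "payroll")
try:
    account.obligate(500_000, "bonuses")  # would overobligate the account
except FundsControlError as exc:
    print("blocked:", exc)
print("remaining:", account.available)  # remaining: 400000
```

An account that is updated only with estimates, as in the MPA example, defeats this check: the recorded balance no longer reflects the actual amount available.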
To its credit, the department has issued and periodically updated policies that address responsibilities for preventing and identifying ADA violations. DOD’s guidance also describes frequent causes of violations within the department and explains the actions necessary to avoid them, including emphasizing management and supervisory duties, training of key funds control personnel, and effective systems and procedures. Basic controls to match payments with the obligation records and to account for and reconcile payments are not effective within the department. DOD has identified payment transactions and related accounting steps as “problem disbursements” and monitors them through management tracking reports as it attempts to correct them. Problem disbursements include unmatched disbursements (UMD) that represent disbursements that have been paid by an accounting office but that have not been matched to the correct obligation records. For example, if one or more of the accounting line elements for each transaction, such as the appropriation, fiscal year, or program code, does not match the information in the accounting records, then the transaction is considered unmatched. For a description of two examples of DOD’s problem disbursements, see appendix II. Problem disbursements increase the risk of making fraudulent or erroneous payments without detection. In addition, problem disbursements impair the reliability of DOD financial statements and DOD’s ability to control its disbursements, a key aspect of funds control. According to DOD’s tracking reports, the department has made progress in addressing problem disbursements, but the department has not achieved its goals in this area.
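The unmatched-disbursement test described above is, in essence, a field-by-field comparison between a payment record and the recorded obligations. The sketch below illustrates that comparison; the field names and account symbols are hypothetical stand-ins, not DOD's actual accounting line elements.

```python
# Accounting line elements that must all agree between a disbursement
# and a recorded obligation for the payment to be considered "matched."
# (Illustrative field names, not DOD's actual schema.)
MATCH_FIELDS = ("appropriation", "fiscal_year", "program_code")


def is_unmatched(disbursement, obligations):
    """Return True if no obligation record agrees with the disbursement
    on every required accounting line element."""
    return not any(
        all(disbursement[f] == ob[f] for f in MATCH_FIELDS)
        for ob in obligations
    )


obligations = [
    {"appropriation": "1234567", "fiscal_year": 2008, "program_code": "A1"},
    {"appropriation": "7654321", "fiscal_year": 2008, "program_code": "B7"},
]

# A payment whose program code matches no recorded obligation is a UMD.
payment = {"appropriation": "1234567", "fiscal_year": 2008, "program_code": "Z9"}
print(is_unmatched(payment, obligations))  # True
```

A mismatch on any single element, such as the fiscal year of the appropriation charged, is enough to leave the payment unmatched until the records are researched and corrected.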
As we reported in 2003, the Defense Finance and Accounting Service (DFAS) expanded its use of existing financial management performance metrics to include special measures for the recording of payments, including the amount of disbursements that are not matched to the corresponding obligations, or UMDs. DOD, in its May 2011 FIAR Plan Status Report on the implementation of its FIAR Plan, included a metric on UMDs. This metric tracks UMDs that are over 120 days old, which DOD refers to as “overaged UMDs.” As stated in that report, DOD’s goal is to have no UMD amounts greater than 120 days old. According to the report, the benefit of reducing UMDs, especially overaged UMDs, is greater accuracy of DOD components’ account balances on management reports and the SBR. Reduction of the amount of UMDs will allow DOD to have more accurate information about the obligations that have been liquidated, improving its budgetary accounting. The presence of UMDs prevents the department from having accurate information about the amount of funds available for obligation and expenditure to carry out its mission, thus increasing the risk of possible ADA violations. A table in the May 2011 FIAR Plan status report on overaged UMDs indicates that, from the second quarter of fiscal year 2009 through the second quarter of fiscal year 2011, DOD made progress in reducing overaged UMDs. In the results section accompanying this table, DOD officials noted that the Army’s UMDs increased due to systems issues with recording obligations and lines of accounting in its Enterprise Resource Planning (ERP) systems. As we and DOD’s auditors have reported, DOD’s funds control and related internal control weaknesses and problem disbursements have impaired its ability to produce reliable financial information for reporting, especially the reliability of the department’s SBR, as well as its other budgetary information.
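The "overaged UMD" metric tracked in the FIAR Plan status report is an aging filter over the unmatched items: any UMD more than 120 days old counts against the goal. A sketch of how such a metric might be computed follows; the dates and dollar amounts are invented for illustration.

```python
from datetime import date

# Hypothetical unmatched disbursements: (date recorded, dollar amount).
umds = [
    (date(2011, 1, 10), 1_200_000),
    (date(2011, 5, 2),    450_000),
    (date(2011, 6, 20),    75_000),
]

as_of = date(2011, 6, 30)

# DOD's reported metric counts UMDs more than 120 days old ("overaged");
# the goal stated in the status report is zero such amounts.
overaged = [(d, amt) for d, amt in umds if (as_of - d).days > 120]
overaged_total = sum(amt for _, amt in overaged)

print(len(overaged), overaged_total)  # 1 1200000
```

Tracking the metric quarter over quarter, as the status report does, shows whether aged items are being researched and cleared faster than new ones accumulate.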
For example, we reported in 1999 that the reliability of DOD’s budgetary information reported in its SBR was impaired. In 2009, the DOD Comptroller directed that the department’s components focus their efforts on budgetary information and the ability to prepare an auditable SBR as one of the department’s two initial priorities, which are now being implemented through the DOD’s FIAR Plan, its FIAR Guidance, and the components’ individual financial improvement plans. As a pilot, DOD designated the Marine Corps SBR as the first military service SBR to undergo an audit. However, as we reported last week, the Marine Corps was unable to undergo an audit of its fiscal year 2010 SBR due to serious control weaknesses that prevented the auditors from performing the audit. Although we found that the Marine Corps was able to address some of these weaknesses, many remained unresolved. We found that the Marine Corps did not develop an effective overall corrective action plan to address the 70 audit findings and related 139 recommendations that identified risks, prioritized actions, and identified required resources needed to help ensure that actions adequately respond to recommendations. Instead, its approach to addressing auditor findings and recommendations for its prior and current audit efforts focuses on short-term corrective actions necessary to support heroic efforts to produce reliable financial reporting at year-end. Such an approach may not result in sustained improvements over the long term that would help ensure that the Marine Corps could routinely produce sound data on a timely basis for decision making and reporting. We also reported key lessons learned from this pilot that, if effectively shared with the other military services, could help them to address similar known challenges in preparing reliable SBRs.
The SBR is designed to provide information on budgeted spending authority reported in the President’s Budget, including budgetary resources, availability of budgetary resources, and how obligated resources have been used. Both Congress and the administration use this information to make decisions about the amounts of appropriations DOD needs to carry out its operations. However, as we stated in our February 2011 High-Risk Series: An Update, DOD’s pervasive control weaknesses adversely affect DOD’s ability to, among other things, anticipate future costs and claims on the budget. DOD, in its Agency Financial Report for Fiscal Year 2010, reported that it made an estimated $1 billion in improper payments under five of its programs. However, this estimate is incomplete because DOD did not include estimates from its commercial payment programs, which account for approximately one-third of the value of DOD payments. Further, both we and the DOD IG have reported on weaknesses in DOD’s payment controls, including weaknesses in its process for assessing the risk of improper payments and reporting estimated amounts of them. DOD’s payment controls are hindered by problems related to inadequate payment processing, poor financial systems, and inadequate supporting documentation. In our February 2011 High-Risk Series: An Update, we identified various DOD high-risk areas, including contract management (designated in 1992) and financial management (designated in 1995), that we have previously reported make the department vulnerable to improper payments. DOD’s contract management weaknesses, such as ineffective oversight, increase the risk that DOD will pay more than the value of the goods delivered or services performed. 
Financial management deficiencies have adversely affected the department's ability to control costs, ensure basic accountability, and prevent and detect fraud, waste, and abuse, and they represent a significant obstacle to achieving an unqualified opinion on DOD's and the U.S. government's consolidated financial statements. In addition, the DOD IG recently reported its assessment that DOD's risk of making improper payments is high. This assessment was based on control deficiencies identified by the Defense Finance and Accounting Service (DFAS) as well as prior assessments made by GAO and the DOD IG. Our prior work and reports issued by the DOD IG have highlighted the department's long-standing and significant problems with estimating and preventing improper payments. Specific weaknesses in DOD's payment controls include inadequate payment processing, inadequate supporting documentation for expenditures, financial system deficiencies, and weak contract audit and payment controls. For example: Inadequate payment processing. The DOD IG reported that the U.S. Marine Corps Forces Special Operations Command did not have effective controls over the reporting and processing of baseline and contingency funds, resulting in improper payments. Specifically, the DOD IG reported that the command did not have effective controls over the recording and processing of 35,699 transactions. Of the 320 sample transactions, 245 had one or more deficiencies. In addition, of the 29 travel vouchers with deficiencies or unsupported expenses, the payments made on 10 vouchers were improper payments. According to the DOD IG report, the improper payments occurred because certifying officers and departmental accountable officials approved travel vouchers with deficiencies and unsupported expenses without thoroughly reviewing them. Inadequate documentation.
As we reported last week, we continue to find that the Navy and Marine Corps have problems maintaining adequate documentation for their transactions. On the basis of the sample of items we tested for an ongoing audit, the Navy did not maintain adequate documentation for us to independently validate its efforts to research and resolve differences between its Fund Balance with Treasury accounts and the records of the Department of the Treasury, a process similar to reconciling a checkbook with a bank statement. Some payments are considered improper payments because of insufficient or missing documentation. In July 2011, the DOD IG reported that DFAS made potentially improper payments of $4.2 million from January 2005 through December 2009 related to active duty military personnel. According to the report, DOD did not ensure that the Defense Joint Military Pay System-Active Component contained only valid active-duty military accounts. For example, the DOD IG found that this system contained accounts for military personnel who received payments after their reported dates of death. Financial system deficiencies. In 2009, we reported that DOD traced the root cause of many improper payments in its military and civilian pay to the inaccurate or untimely reporting of entitlement data to DOD's automated systems for such areas as time and attendance, personnel actions, and pay allowances. We reported that DOD had described steps to monitor and track these improper payments; however, it was unclear whether those actions would address the root causes of the deficiencies. In August 2011, the DOD IG reported that the Army's controls over its Deployable Disbursing System (DDS) payments were inadequate and resulted in, among other things, improper payments. The DOD IG found that the Army was at risk of improper payments because its Financial Management Centers did not effectively review user access to DDS or oversee the payment process.
The DOD IG reported that the Army’s disbursing personnel made nine duplicate payments to vendors and did not collect on these improper payments. Two of the duplicate payments were referred by the DOD IG to the Defense Criminal Investigative Service because of the suspicious and potentially fraudulent nature of the payments. Weak contract audit and payment controls. As we testified in February 2011, our 2009 audit work identified, among other weaknesses in DOD’s contract payment controls, weaknesses in contract auditing, which increase the risk of improper payments. In 2009, we reported on audit quality problems at Defense Contract Audit Agency (DCAA) offices nationwide, including compromise of auditor independence, insufficient audit testing, and inadequate planning and supervision. In addition, DCAA’s management environment and quality assurance structure were based on a production-oriented mission that put DCAA in the role of facilitating DOD contracting without also protecting the public interest. At that time, we found serious quality problems in the 69 audits and cost-related assignments we reviewed, resulting in DCAA rescinding over 80 audit reports and removing over 200 DOD contractors from direct billing privileges, which allow them to submit invoices for payment without review by the government. The Improper Payments Information Act of 2002 (IPIA) requires DOD to annually identify programs and activities susceptible to significant improper payments, estimate amounts improperly paid under those programs and activities, and report on these estimates and the actions to reduce improper payments. In July 2009, we reported that DOD did not conduct risk assessments on all of its payment activities, as $322 billion in agency outlays were excluded from the amounts DOD assessed. 
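IPIA's estimation requirement is ordinarily met through statistical sampling: a sample of payments is reviewed, the dollar-weighted improper rate observed in the sample is computed, and that rate is projected to total outlays. A minimal sketch of the extrapolation step, using entirely invented figures (not DOD data):

```python
# Hypothetical illustration of an IPIA-style improper payment estimate.
# All figures below are invented for illustration; they are not DOD data.

population_outlays = 5_000_000_000.0   # total outlays in the payment activity ($)
sample = [
    # (payment amount $, improper amount found in review $)
    (12_000, 0), (8_500, 0), (30_000, 1_200), (4_000, 0),
    (22_000, 0), (15_000, 750), (9_000, 0), (18_000, 0),
]

sampled_dollars = sum(amount for amount, _ in sample)
improper_dollars = sum(improper for _, improper in sample)

# Dollar-weighted improper payment rate observed in the sample.
rate = improper_dollars / sampled_dollars

# Project the sample rate to the full population of outlays.
estimated_improper = rate * population_outlays
print(f"sample improper rate: {rate:.4%}")
print(f"estimated improper payments: ${estimated_improper:,.0f}")
```

A real IPIA estimate would rest on a statistically valid sample design with confidence intervals; the point here is only how a sampled rate is extrapolated to total outlays.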
While DOD components conducted risk assessments for six payment activities totaling about $493 billion in fiscal year 2007, we identified an additional $322 billion in outlays reported in DOD's SBR that had not been assessed. Also, the DOD IG recently reported that DOD's First Quarter FY 2010 High Dollar Overpayments Report (Overpayments Report) did not accurately portray the department's risk of high-dollar overpayments. The DOD IG reported that the Overpayments Report was incomplete because not all DOD payments were examined. DFAS reviews for high-dollar overpayments excluded approximately $167.5 billion, or 55 percent, of DOD's total $303.7 billion in gross outlays. DOD's inability to identify and reconcile total payments to its SBR affected the reliability and completeness of its estimates for and reviews of improper payments. In that July 2009 report (GAO-09-442), we also noted that DOD could leverage the results from its existing Recovery Auditing Act processes, which identify actual commercial under- and overpayments, to develop its statistical sampling methodology and enhance the reported estimate. The DOD Comptroller testified in May 2011 that DOD had not estimated the amount of improper payments for commercial pay because the department uses prepayment screening, both automated and manual, to prevent improper payments. He added that one especially important tool to prevent commercial pay improper payments is the department's Business Activity Monitoring (BAM) software program, introduced in August 2008. However, the DOD IG reported, among other things, that the BAM tool had a false positive rate of more than 95 percent and that the BAM review methodology was not standardized across payment systems or even within the same office. The large number of payments flagged for review (false positives) made it difficult to conduct the appropriate research in a timely manner without delaying payment.
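The practical cost of a false-positive rate above 95 percent can be illustrated with simple arithmetic. The daily volume and per-review time below are invented assumptions; only the false-positive rate comes from the DOD IG finding discussed above:

```python
# Hypothetical illustration of the review burden created by a high
# false-positive rate in a prepayment screening tool such as BAM.
# The volumes and review times are invented; only the >95% false-positive
# rate reflects the DOD IG finding.

flagged_per_day = 2_000          # payments flagged for manual review (assumed)
false_positive_rate = 0.95       # share of flags that are not actually improper
minutes_per_review = 10          # analyst time per flagged payment (assumed)

true_hits = flagged_per_day * (1 - false_positive_rate)
wasted_reviews = flagged_per_day * false_positive_rate
analyst_hours = flagged_per_day * minutes_per_review / 60

print(f"genuine improper payments found per day: {true_hits:.0f}")
print(f"reviews spent on false positives per day: {wasted_reviews:.0f}")
print(f"daily analyst hours consumed: {analyst_hours:.1f}")
```

Under these assumed volumes, fewer than 1 in 20 reviews surfaces a genuine improper payment, which is why payments sat unresearched or delayed.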
The IG reported that the lack of a standardized methodology could lead to DFAS not detecting and preventing improper payments because of poor-quality reviews. The Comptroller stated in his May 2011 testimony that, in view of legislative changes and more recent OMB guidance, DOD plans to do postpayment statistical sampling for commercial payments for those systems not currently covered by the BAM tool to supplement its prepayment measures. Although DOD has dedicated significant resources under its FIAR Plan to remediate its identified financial management weaknesses, it faces significant challenges in addressing those persistent weaknesses. DOD's large number of nonintegrated business systems, complex and inefficient payment processes, and weak internal controls put the department at risk of overobligating or overspending its appropriations. DOD has been addressing its problem disbursements, but they remain a contributing factor to the department's funds control issues. The department's weak controls over payments increase the risk of inaccurate cost information and improper payments. Given DOD's stated goal of achieving audit readiness on its consolidated financial statements by the end of fiscal year 2017, it will be critical that the department continue to ensure that steady progress is being made. Moreover, for DOD to move forward, it will be important for the department to resolve its problems with multiple, disparate nonintegrated systems and to ensure that whatever systems solutions are chosen will provide the underlying foundation for auditable financial statements. Mr. Chairman and members of the panel, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the panel may have at this time. For further information regarding this testimony, please contact Asif A. Khan, (202) 512-9869 or khana@gao.gov. Key contributors to this testimony include F.
Abe Dymond, Assistant Director; Daniel Egan; Maxine Hattery; Robert Sharpe; and Sandra Silzer. Congress has long recognized the importance of internal control, beginning with the Budget and Accounting Procedures Act of 1950, over 60 years ago. The 1950 act placed primary responsibility for establishing and maintaining internal control squarely on the shoulders of agency management. In 1982, Congress enacted the law commonly known as the Federal Managers' Financial Integrity Act (FMFIA), and the Office of Management and Budget (OMB) issued Circular No. A-123 to require each agency to establish and maintain internal control systems that would enable obligations and costs to be recorded in compliance with applicable law; funds, property, and other assets to be safeguarded; and revenues and expenditures applicable to agency operations to be properly recorded and accounted for. Within this broad framework of internal control required by FMFIA, the Department of Defense, like other executive-branch agencies, must also design and implement effective systems of funds control, payment controls, and internal control over financial reporting. Auditors of DOD's financial statements assess the effectiveness of these four types of internal controls in varying degrees as part of the financial statement audit. Further, one financial statement, the Statement of Budgetary Resources, was designed to report on agencies' use of federal funds and to subject agencies' funds control to audit. Listed below is a brief description of the four types of controls. Internal control represents an organization's plans, methods, and procedures used to meet its missions, goals, and objectives and serves as the first line of defense in safeguarding assets and preventing and detecting errors, fraud, waste, abuse, and mismanagement.
Internal control is to provide reasonable assurance that an organization's objectives are achieved through (1) effective and efficient operations, (2) reliable financial reporting, and (3) compliance with laws and regulations. Safeguarding of assets is a subset of all of these objectives. The purpose of funds control is to implement controls that prevent both obligations and disbursements from exceeding appropriations and to support the proper preparation and execution of the budget. Funds control systems must be able to accurately record obligations, collections, and disbursements against appropriations and the accounts established to track the status of appropriations. An agency's funds control system is the primary tool for ensuring that the agency complies with congressional spending mandates and is, therefore, central to Congress's ability to exercise its constitutional power of the purse. In the executive branch of the federal government, funds control requirements are implemented by executive agencies consistent with policies and guidance issued by OMB, the Department of the Treasury (Treasury), and the head of each executive agency. According to OMB Circular No. A-11, proper funds control should include the following elements: agency regulations that are required by, and designed to ensure compliance with the prohibitions contained in, the Antideficiency Act (ADA), which are described below; controls that prevent both obligations and expenditures from exceeding appropriations and related administrative accounts and that hold officers and employees accountable when they violate those restrictions; and funds control systems that operate within the agency's internal control systems, including the objective of complying with laws and regulations.
The ADA prohibits federal officers and employees from making or authorizing an expenditure from, or creating or authorizing an obligation under, any appropriation or fund in excess of the amount available in the appropriation or fund unless authorized by law, 31 U.S.C. § 1341(a)(1)(A); involving the government in any obligation to pay money before funds have been appropriated for that purpose, unless otherwise allowed by law, 31 U.S.C. § 1341(a)(1)(B); accepting voluntary services for the United States, or employing personal services not authorized by law, except in cases of emergency involving the safety of human life or the protection of property, 31 U.S.C. § 1342; and making obligations or expenditures in excess of an apportionment or reapportionment, or in excess of the amount permitted by agency regulations, 31 U.S.C. § 1517(a). Once it is determined that there has been a violation, the agency head "shall report immediately to the President and Congress all relevant facts and a statement of actions taken" and shall transmit a copy of the report to the Comptroller General at the same time. OMB has issued further instructions on preparing the reports, which may be found in OMB Circular No. A-11, Preparation, Submission, and Execution of the Budget, § 145. Internal control over financial reporting should assure the safeguarding of assets from waste, loss, unauthorized use, or misappropriation, as well as assure compliance with laws and regulations pertaining to financial reporting. Financial reporting includes the annual financial statements of an agency as well as other significant internal or external financial reports. Other significant financial reports are defined as any financial reports that could have a material effect on a significant spending, budgetary, or other financial decision of the agency or that are used to determine compliance with laws and regulations on the part of the agency.
An agency needs to determine the scope of financial reports that are significant, that is, which reports are included in the assessment of internal control over financial reporting. In addition to the annual financial statements, significant reports might include quarterly financial statements; financial statements at the operating division or program level; budget execution reports; reports used to monitor specific activities, such as specific revenues, receivables, or liabilities; and reports used to monitor compliance with laws and regulations, such as the Antideficiency Act. Payment controls, a discrete subset of internal controls and funds control, are needed to maintain accountability over resources, including identifying, reporting, and reducing improper payments and problem disbursements and recouping improper payments when they are made. These controls should ensure that payments and collections are timely and accurate and that public funds are used properly. Managers are responsible for ensuring that internal controls are established and functioning properly.
Managers with responsibilities for determining entitlement and for authorizing and executing payments and collections shall: create, document, and maintain an organizational structure and business processes that appropriately segregate assigned duties, emphasize adherence to policies and procedures, and employ sound internal accounting and system access controls; implement finance and accounting systems that comply with federal financial management systems requirements, keep disbursement (entitlement) and accounting records accurate and in balance from contract execution through closeout, and monitor the causes of late payments and interest penalties incurred; establish systematic controls that capture adequate audit trails to allow the tracing of financial events from source documents to general ledger account balances through successive levels of summarization and financial reports and statements; ensure that data are processed using accurate coding and that errors are corrected; employ systems that ensure the authenticity of electronically transmitted data, including the electronic signature, and ensure that controls provide reasonable assurance that deliberate or inadvertent manipulation, modification, or loss of data during transmission is detected; validate cash management and payment performance quality and effectiveness on an annual basis; and periodically test the effectiveness of internal controls, document the results of testing, and take necessary corrective actions.

Appendix II: DOD "Problem Disbursements"

The Department of Defense's (DOD) disbursement posting policy is in Chapter 11 of Volume 3 of its Financial Management Regulation (FMR). According to Chapter 11, DOD's policy is that a disbursement be matched to its corresponding detail-level obligation and be recorded as promptly as current systems and business practices reasonably permit.
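The matching policy described in Chapter 11 can be sketched as a simple classification: a payment that cites no recorded obligation, or one whose cumulative disbursements exceed the recorded obligation, becomes a problem disbursement. A minimal illustration with invented records:

```python
# Minimal sketch of matching disbursements to detail-level obligations,
# in the spirit of DOD FMR Volume 3, Chapter 11. All records are invented.

obligations = {          # obligation id -> recorded obligation amount ($)
    "OB-100": 50_000,
    "OB-200": 10_000,
}

disbursements = [        # (obligation id cited on the payment, amount $)
    ("OB-100", 30_000),
    ("OB-200", 12_000),  # exceeds its obligation -> NULO
    ("OB-999", 5_000),   # no matching obligation -> unmatched disbursement
]

disbursed = {}           # cumulative disbursements per obligation
unmatched, nulos = [], []

for ob_id, amount in disbursements:
    if ob_id not in obligations:
        unmatched.append((ob_id, amount))   # UMD: no detail obligation found
        continue
    disbursed[ob_id] = disbursed.get(ob_id, 0) + amount
    if disbursed[ob_id] > obligations[ob_id]:
        nulos.append(ob_id)                 # NULO: disbursed exceeds obligated

print("unmatched disbursements:", unmatched)
print("negative unliquidated obligations:", nulos)
```

This is only the classification step; the FMR's actual requirements cover the research, aging, and resolution of each category.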
DOD recognizes that while most obligations and disbursements are matched automatically, some must be manually matched, mainly because of nonautomated processes or the rejection of transactions by automated systems. As defined by DOD, problem disbursements include unmatched disbursements (UMD) and negative unliquidated obligations (NULO). The definitions for these terms are also in Volume 3, Chapter 11 of the DOD FMR. An unmatched disbursement is a disbursement transaction that has been received and accepted by an accounting office but has not been matched to the correct detail obligation, including transactions that have been rejected back to the paying office or central disbursement clearing organization by an accounting office. A negative unliquidated obligation is a disbursement transaction that has been matched to a cited detail obligation (unlike an unmatched disbursement), but for which the total recorded disbursements exceed the recorded obligation. Chapter 11 also prescribes the requirements for researching UMDs and NULOs. For example, prevalidation is a procedure that requires that a proposed payment be identified and matched to its applicable, proper supporting obligation recorded in the official accounting system and that the line(s) of accounting cited on the payment match the data recorded in the accounting system. As stated in Chapter 11, prevalidation procedures help better ensure that contracts are not overpaid.

DOD Financial Management: Ongoing Challenges in Implementing the Financial Improvement and Audit Readiness Plan. GAO-11-932T. Washington, D.C.: September 15, 2011.
DOD Financial Management: Marine Corps Statement of Budgetary Resources Audit Results and Lessons Learned. GAO-11-830. Washington, D.C.: September 15, 2011.
DOD Financial Management: Numerous Challenges Must Be Addressed to Achieve Auditability. GAO-11-864T. Washington, D.C.: July 28, 2011.
Improper Payments: Recent Efforts to Address Improper Payments and Remaining Challenges. GAO-11-575. Washington, D.C.: April 15, 2011.
Contract Audits: Role in Helping Ensure Effective Oversight and Reducing Improper Payments. GAO-11-331T. Washington, D.C.: February 1, 2011.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.
Defense Health: Management Weaknesses at Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury Require Attention. GAO-11-219. Washington, D.C.: February 28, 2011.
DCAA Audits: Widespread Problems with Audit Quality Require Significant Reform. GAO-09-468. Washington, D.C.: September 23, 2009.
Improper Payments: Significant Improvements Needed in DOD's Efforts to Address Improper Payment and Recovery Auditing Requirements. GAO-09-442. Washington, D.C.: July 29, 2009.
DOD Financial Management: Improvements Are Needed in Antideficiency Act Controls and Investigations. GAO-08-1063. Washington, D.C.: September 26, 2008.
Financial Audit Guide: Auditing the Statement of Budgetary Resources. GAO-02-126G. Washington, D.C.: December 2001.
Financial Management: DOD's Metrics Program Provides Focus for Improving Performance. GAO-03-457. Washington, D.C.: March 28, 2003.
Department of Defense: Status of Financial Management Weaknesses and Actions Needed to Correct Continuing Challenges. GAO/T-AIMD/NSIAD-99-171. Washington, D.C.: May 4, 1999.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense (DOD) is required to design and implement effective internal controls, including controls over its use of public funds ("funds controls") and controls over its payment processes ("payment controls"). As a steward of the public's resources, DOD is responsible and accountable for (1) using public funds efficiently and effectively and for the purposes and within the time frames and amounts prescribed by law, (2) making payments to the right parties in the correct amount within allowable time frames and recouping any improper payments, and (3) accurately recording and reporting on its transactions and use of public funds. GAO's testimony focuses on (1) challenges DOD faces in its funds control, and their effect on the reliability of DOD's financial information, especially the budgetary information in DOD's Statement of Budgetary Resources and (2) weaknesses in DOD's payment controls that put the department at risk of making improper payments. This statement is based on our prior work and reports issued by the department's Inspector General (DOD IG). The panel requested that GAO provide its perspective on the status of DOD's process for identifying and reporting on improper payments, examples of Antideficiency Act violations within DOD along with the causes of these violations, and the effect of problem disbursements on DOD's ability to report reliable information on its financial statements. For years, GAO and DOD IG have reported on DOD's inability to provide effective funds control and report reliable financial information, including budgetary information. In 2008, GAO reported that DOD's complex and inefficient payment processes, nonintegrated business systems, and weak internal controls impair its ability to maintain proper funds control, putting DOD at risk of overobligating or overspending its appropriations.
Specifically, DOD's weak internal control environment has hindered its ability to ensure that transactions are accurately recorded, sufficiently supported, and properly executed by trained personnel subject to effective supervision. Funds control weaknesses place DOD at risk of violating the Antideficiency Act (ADA), specifically through overobligations and overexpenditures. DOD reported ADA violations from fiscal year 2007 through September 15, 2011, with a total dollar amount of $927.4 million. DOD identifies certain payment transactions and related accounting problems as "problem disbursements." Problem disbursements include unmatched disbursements (UMD), which are disbursements that have been paid by an accounting office but have not been matched to the correct obligation records. DOD reports that it reduced overaged UMDs from $666.5 million to $109.6 million between the second quarter of fiscal year 2009 and the second quarter of fiscal year 2011. These and other weaknesses have prevented DOD from reporting reliable financial information, including budgetary information in an auditable Statement of Budgetary Resources. Although DOD has dedicated significant resources to remediate its identified weaknesses, it faces significant challenges in addressing those persistent weaknesses. DOD reported for fiscal year 2010 that it made an estimated $1 billion in improper payments. However, this estimate is incomplete because DOD did not include estimates from its commercial payment programs, which account for approximately one-third of the value of DOD payments. Further, both GAO and the DOD IG have reported on weaknesses in DOD's payment controls, including weaknesses in its process for assessing the risk of improper payments and reporting estimated amounts of them. DOD's problem disbursements continue to be a concern and are a contributing factor to the department's funds control issues.
The department's weak controls over payments increase the risk of inaccurate cost information and improper payments. Given DOD's stated goal of achieving audit readiness on its consolidated financial statements by the end of fiscal year 2017, it will be critical that the department continue to ensure that steady progress is being made. Moreover, for DOD to move forward, it will be important that the department resolve its problems with multiple, disparate nonintegrated systems to ensure that whatever systems solutions are chosen will provide the underlying foundation for auditable financial statements.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business. It is especially important for government agencies, where maintaining the public's trust is essential. The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have changed the way our government, the nation, and much of the world communicate and conduct business. However, without proper safeguards, systems are unprotected from individuals and groups with malicious intent who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. This concern is well founded for a number of reasons, including the increase in reports of security incidents, the ease of obtaining and using hacking tools, the steady advance in the sophistication and effectiveness of attack technology, and the dire warnings of new and more destructive attacks to come. Computer-supported federal operations are likewise at risk. Our previous reports and those of agency inspectors general describe persistent information security weaknesses that place a variety of federal operations at risk of disruption, fraud, and inappropriate disclosure. Thus, we have designated information security as a governmentwide high-risk area since 1997, a designation that remains in effect. We have specifically recognized the importance of information security related to critical infrastructures. Critical infrastructures are physical or virtual systems and assets so vital to the nation that their incapacitation or destruction would have a debilitating impact on national and economic security and on public health and safety. These systems and assets, such as the electric power grid, chemical plants, and water treatment facilities, are essential to the operations of the economy and the government.
Recent terrorist attacks and threats have underscored the need to protect these critical infrastructures. If their vulnerabilities are exploited, our nation’s critical infrastructures could be disrupted or disabled, possibly causing loss of life, physical damage, and economic losses. Although the majority of our nation’s critical infrastructures are owned by the private sector, the federal government owns and operates key facilities that use control systems, including oil, gas, water, energy, and nuclear facilities. Control systems are used within these infrastructures to monitor and control sensitive processes and physical functions. Typically, control systems collect sensor measurements and operational data from the field, process and display this information, and relay control commands to local or remote equipment. Control systems perform functions that range from simple to complex. They can be used to simply monitor processes—for example, the environmental conditions in a small office building—or to manage the complex activities of a municipal water system or a nuclear power plant. In the electric power industry, control systems can be used to manage and control the generation, transmission, and distribution of electric power. For example, control systems can open and close circuit breakers and set thresholds for preventive shutdowns. There are two primary types of control systems: distributed control systems and supervisory control and data acquisition (SCADA) systems. Distributed control systems typically are used within a single processing or generating plant or over a small geographic area and communicate using local area networks, while SCADA systems typically are used for large, geographically dispersed operations and rely on long-distance communication networks. In general, critical infrastructure sectors and industries depend on both types of control systems to fulfill their missions or conduct business. 
For example, a utility company that serves a large geographic area may use distributed control systems to manage power generation at each power plant and a SCADA system to manage power distribution to its customers. A SCADA system is generally composed of these six components (see fig. 1): (1) operating equipment, which includes pumps, valves, conveyors, and substation breakers; (2) instruments, which sense conditions such as pH, temperature, pressure, power level, and flow rate; (3) local processors, which communicate with the site’s instruments and operating equipment, collect instrument data, and identify alarm conditions; (4) short-range communication, which carries analog and discrete signals between the local processors and the instruments and operating equipment; (5) host computers, where a human operator can supervise the process, receive alarms, review data, and exercise control; and (6) long-range communication, which connects local processors and host computers using, for example, leased phone lines, satellite, and cellular packet data. A distributed control system is similar to a SCADA system but does not operate over a large geographic area or use long-range communications. We have previously reported that critical infrastructure control systems face increasing risks due to cyber threats, system vulnerabilities, and the potential impact of attacks as demonstrated by reported incidents. Cyber threats can be intentional or unintentional, targeted or nontargeted, and can come from a variety of sources. The Federal Bureau of Investigation has identified multiple sources of threats to our nation’s critical infrastructures, including foreign nation states engaged in information warfare, domestic criminals and hackers, and disgruntled employees working within an organization. Table 1 summarizes those groups or individuals that are considered to be key sources of threats to our nation’s infrastructures. 
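The SCADA data flow described above, in which instruments feed a local processor that collects readings and identifies alarm conditions for a host operator, can be sketched as a toy model. Every name, reading, and threshold below is invented for illustration:

```python
# Toy model of the SCADA chain described above: instruments -> local
# processor -> host. All readings and alarm thresholds are invented.

instruments = {          # instrument name -> latest sensed value
    "pressure_psi": 310.0,
    "temperature_c": 74.5,
    "flow_rate_gpm": 1_250.0,
}

alarm_limits = {         # local processor's alarm thresholds (assumed)
    "pressure_psi": (50.0, 300.0),
    "temperature_c": (0.0, 90.0),
    "flow_rate_gpm": (100.0, 2_000.0),
}

def local_processor_scan(readings, limits):
    """Collect instrument data and identify any alarm conditions."""
    alarms = []
    for name, value in readings.items():
        low, high = limits[name]
        if not (low <= value <= high):
            alarms.append((name, value))
    return {"readings": readings, "alarms": alarms}

# The host computer receives the scan over long-range communication,
# and a human operator reviews any alarms and exercises control.
report = local_processor_scan(instruments, alarm_limits)
print("alarms:", report["alarms"])
```

Real SCADA local processors do this continuously against live field signals; the sketch only shows the collect-then-flag step that the component description names.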
Control systems are more vulnerable to cyber threats, including intentional attacks and unintended incidents, than in the past for several reasons, including their increasing standardization and their increased connectivity to other systems and the Internet. For example, in August 2006, two circulation pumps at Unit 3 of the Browns Ferry, Alabama, nuclear power plant operated by TVA failed, forcing the unit to be shut down manually. The failure of the pumps was traced to an unintended incident involving excessive traffic on the control system network caused by the failure of another control system device. Critical infrastructure owners face both technical and organizational challenges to securing control systems. Technical challenges—including control systems’ limited processing capabilities, real-time operations, and design constraints—hinder an infrastructure owner’s ability to implement traditional information technology (IT) security processes, such as strong user authentication and patch management. Organizational challenges include difficulty in developing a compelling business case for investing in control systems security and differing priorities of information security personnel and control systems engineers. To address the increasing threat to control systems governing critical infrastructures, both federal and private organizations have begun efforts to develop requirements, guidance, and best practices for securing control systems. For example, FISMA outlines a comprehensive, risk-based approach to securing federal information systems, which encompass control systems. Federal organizations, including the National Institute of Standards and Technology (NIST), the Federal Energy Regulatory Commission (FERC), and the Nuclear Regulatory Commission (NRC), have used a risk-based approach to develop guidance and standards to secure control systems. 
NIST has developed guidance that currently applies to federal agencies; however, much of the FERC and NRC guidance and many of their standards have not been finalized. Once implemented, FERC and NRC standards will apply to both public and private organizations that operate covered critical infrastructures. We have previously reported on the importance of using a risk-based approach for securing critical infrastructures, including control systems. Risk management has received widespread support within and outside government as a tool that can help set priorities on how to protect critical infrastructures. While numerous and substantial gaps in security may exist, resources for closing these gaps are limited and must compete with other national priorities. Recognizing the importance of securing federal agencies’ information and systems, Congress enacted FISMA to strengthen the security of information and information systems within federal agencies, which include control systems. FISMA requires each agency to develop, document, and implement an agencywide information security program to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source.
Specifically, this program is to include

- periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems;
- risk-based policies and procedures that cost effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system;
- subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems;
- security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency;
- periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems;
- a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency;
- procedures for detecting, reporting, and responding to security incidents; and
- plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency.

Furthermore, FISMA established a requirement that each agency develop, maintain, and annually update an inventory of major information systems (including major national security systems) operated by the agency or under its control. This inventory is to include an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency.
FISMA also directs NIST to develop standards and guidelines for systems other than national security systems. As required by FISMA and based on the objectives of providing appropriate levels of information security, NIST developed standards for all agencies to categorize their information and information systems according to a range of risk levels, guidelines recommending the types of information and information systems to be included in each category, and minimum information security requirements for information and information systems in each category. NIST standards and guidelines establish a risk management framework that instructs agencies on providing an acceptable level of information security for all agency operations and assets and that guides the testing and evaluation of information security control effectiveness within an agencywide information security program. Recognizing the importance of documenting standards and guidelines as part of an agencywide information security program, NIST emphasizes that agencies must develop and promulgate formal, documented policies and procedures in order to ensure the effective implementation of security requirements. NIST also collaborates with federal and industry stakeholders to develop standards, guidelines, checklists, and test methods to help secure federal information and information systems, including control systems. For example, NIST is currently developing guidance for federal agencies that own or operate control systems to comply with federal information system security standards and guidelines. The guidance identifies issues and modifications to consider in applying information security standards and guidelines to control systems. In December 2007, NIST released an augmentation to Special Publication (SP) 800-53, Recommended Security Controls for Federal Information Systems, which provides a security control framework for control systems. 
According to NIST officials, while most controls in SP 800-53 are applicable to control systems as written, several controls do require supplemental guidance and enhancements. Under the Energy Policy Act of 2005, FERC was authorized to (1) appoint an electricity reliability organization to develop and enforce mandatory electricity reliability standards, including cyber security, and (2) approve or remand each proposed standard. The commission may also direct the reliability organization to develop a new standard or modify approved standards. Both the commission and the reliability organization have the authority to enforce approved standards, investigate incidents, and impose penalties (up to $1 million a day) on noncompliant electricity asset owners or operators. FERC has conducted several activities to begin implementing the requirements of the act. In July 2006, FERC certified the North American Electric Reliability Corporation (NERC) as the national electric reliability organization. In August 2003, prior to passage of the Energy Policy Act of 2005, NERC adopted Urgent Action 1200, a temporary, voluntary cyber security standard for the electric industry. Urgent Action 1200 directed electricity transmission and generation owners and operators to develop a cyber security policy, identify critical cyber assets, and establish controls for and monitor electronic and physical access to critical cyber assets. Urgent Action 1200 remained in effect on a voluntary basis until June 1, 2006, at which time NERC proposed eight critical infrastructure protection reliability standards to replace the Urgent Action 1200 standard. In July 2007, FERC issued a notice of proposed rulemaking in which it proposed to approve eight critical infrastructure reliability standards, which included standards for control systems security. FERC also proposed to direct NERC to modify the areas of these standards that required improvement. 
In January 2008, after considering public comments on the notice of proposed rulemaking, FERC approved the reliability standards and the accompanying implementation plan. It also directed NERC to develop modifications to strengthen the standards and to monitor the development and implementation of the NIST standards to determine if they contain provisions that will protect the bulk-power system better than NERC’s reliability standards. The organizations subject to the standards, including utilities like TVA, must be auditably compliant with the standards by 2010. The NRC, which has regulatory authority over nuclear power plant safety and security, has conducted several activities related to enhancing the cyber security of control systems. In 2005, an industry task force led by the Nuclear Energy Institute (NEI) developed and released the Cyber Security Program for Power Reactors (NEI 04-04) to provide nuclear power reactor licensees a means for developing and maintaining effective cyber security programs at their sites. In December 2005, the commission staff accepted the method outlined in NEI 04-04 for establishing and maintaining cyber security programs at nuclear power plants. TVA officials stated that the agency has begun a program to comply with NEI 04-04 guidelines and plans to complete implementation of corrective actions identified as a result of these guidelines over the next 3 years, consistent with planned plant outages and upgrade projects. In January 2006, the commission issued a revision to Regulatory Guide 1.152, Criteria for Use of Computers in Safety Systems of Nuclear Power Plants, which provides cyber security-related guidance for the design of nuclear power plant safety systems. In April 2007, the commission finalized a rule that added “external cyber attack” to the events that power reactor licensees are required to prepare to defend against. 
In addition, the commission initiated a rulemaking process that provides cyber security requirements for digital computer and communication networks, including systems that are needed for plant safety, security, or emergency response. The public comment period for this rulemaking closed in March 2007. Commission officials stated that they estimate this rulemaking process will be completed in early 2009. Once the rulemaking process is completed and requirements for nuclear power plant cyber security programs are finalized, the commission is planning to conduct a range of oversight activities, including inspections at power plants. According to commission officials, all nuclear plant operators have committed to complete implementation of the NEI-04-04 program at their sites. The TVA is a federal corporation and the nation’s largest public power company. Its mission is to supply affordable, reliable power, support a thriving river system, and stimulate sustainable economic development in the public interest. In addition to generating and transmitting power, TVA also manages the nation’s fifth-largest river system to minimize flood risk, maintain navigation, provide recreational opportunities, and protect water quality. TVA is governed by a nine-member Board of Directors that is led by the Chairman. Each board member is nominated by the President of the United States and confirmed by the Senate. The TVA Chief Executive Officer reports to the TVA Board of Directors. TVA’s power service area covers 80,000 square miles in the southeastern United States, an area that includes almost all of Tennessee and parts of Mississippi, Kentucky, Alabama, Georgia, North Carolina, and Virginia, and has a total population of about 8.7 million people (see fig. 2). TVA operates 11 coal-fired fossil plants, 8 combustion turbine plants, 3 nuclear plants, and a hydroelectric system that includes 29 hydroelectric dams and one pumped storage facility (see fig. 2 and fig. 3). 
Fossil plants produce about 60 percent of TVA’s power, nuclear plants about 30 percent, and the hydroelectric system about 10 percent. TVA also owns and operates one of the largest transmission systems in North America. TVA’s transmission system moves electric power from the generating plants where it is produced to distributors of TVA power and to industrial and federal customers across the region. TVA provides power to three main customer groups: distributors, directly served customers, and off-system customers. There are 159 distributors—109 municipal utility companies and 50 cooperatives—that resell TVA power to consumers. These groups represent the base of TVA’s business, accounting for 85 percent of its total revenue. Fifty-three large industrial customers and six federal installations buy TVA power directly. They represent 11 percent of TVA’s total revenue. Twelve surrounding utilities buy power from TVA on the interchange market. Sales to these utilities represent 4 percent of TVA’s total revenue. Control systems are essential to TVA’s operation. TVA uses control systems to both generate and deliver power. In generation, control systems are used within power plants to open and close valves, control equipment, monitor sensors, and ensure the safe and efficient operation of a generating unit. Many control systems networks connect with TVA’s corporate network to transmit information about system status. To deliver power, TVA monitors the status of its own and surrounding transmission facilities from two operations centers. Each center is staffed 24 hours a day and can serve as a backup for the other center. Control systems at these centers are used to open and close breakers and balance the transmission of power across the TVA network while accounting for changes in network capacity due to outages and changes in demand that occur continuously throughout the day.
TVA’s control systems range in capacity from simple systems with limited functionality located in one facility to complex, geographically dispersed systems with multiple functions. The ages of these control systems range from modern systems to systems dating back 20 or more years to the original construction of a facility. As shown in table 2, TVA has designated certain senior managers to serve in the key information security roles specified by FISMA. Responsibility for control systems security is distributed throughout TVA (see fig. 4). TVA’s Information Services organization provides general guidance, assistance in FISMA compliance, and technical assistance in control systems security. The Information Services organization also manages the overall TVA corporate computer network that links facilities throughout the TVA service area and is connected to the Internet. As of February 2008, the Enterprise IT Security organization within Information Services was given specific responsibility for cyber security throughout the agency. However, the control systems located within a plant are integrated with and managed as part of the generation equipment, safety and environmental systems, and other physical equipment located at that plant. This means that development, day-to-day maintenance and operation, and upgrades of control systems are handled by the business units responsible for the facilities where the systems are located. Specifically, nuclear systems are managed by the Nuclear Power Group; coal and combustion turbine control systems are managed by the Fossil Power Group; and hydroelectric facilities are managed by River Operations. Transmission control systems are managed by TVA’s Transmission and Reliability Organization, located within its Power Systems Operations business unit. The Transmission and Reliability Organization is highly dependent on control systems.
To comply with NERC Urgent Action 1200, and to help ensure that its systems are secure, the Transmission and Reliability Organization has taken on additional information security responsibilities beyond those handled by other TVA organizations. For example, the organization manages portions of its own network infrastructure. It also has arranged for both internal and external security assessments in order to enhance the security of its control systems. TVA had not fully implemented appropriate security practices to secure the control systems used to operate its critical infrastructures. Both the corporate network infrastructure and control systems networks and devices at individual facilities and plants were vulnerable to disruption. In addition, physical security controls at multiple locations did not sufficiently protect critical control systems. The interconnections between TVA’s control system networks and its corporate network increase the risk that security weaknesses on the corporate network could affect control systems networks. For example, because of weaknesses in the separation of lower security network segments from higher security network segments on TVA networks, an attacker who gained access to a less secure portion of a network such as the corporate network could potentially compromise equipment in a more secure portion of the network, including equipment that has access to control systems. As a result, TVA’s control systems that operate its critical infrastructures are at increased risk of unauthorized modification or disruption by both internal and external threats. The TVA corporate network infrastructure had multiple weaknesses that left it vulnerable to intentional or unintentional compromise of the confidentiality, integrity, and availability of the network and devices on the network. These weaknesses applied both at TVA headquarters and to the portions of the corporate network located at the individual facilities we reviewed.
For example, one remote access system that we reviewed was not securely configured. Further, individual servers and workstations lacked key patches and were insecurely configured. In addition, the configuration of numerous network infrastructure protocols and devices provided limited or ineffective security protections. Moreover, the intrusion detection system that TVA used had significant limitations. As a result, TVA’s control systems were at an increased risk of unauthorized access or disruption via access from the corporate network. Furthermore, weaknesses in the intrusion detection system could limit the ability of TVA to detect malicious or unintended events on its network. Remote access is any access to an organizational information system by a user (or an information system) that communicates through an external, nonorganization-controlled network (e.g., the Internet). NIST guidance states that information systems should establish a trusted communications path between remote users and an information system and that two-factor authentication should be part of an organization’s remote access authentication requirements. Additionally, TVA policy requires that if remote access technology is used to connect to the network, it must be configured securely. One device used for remote access is a virtual private network (VPN). TVA did not configure a VPN system to include effective security mechanisms. This could allow an attacker who compromised a remote user’s computer to remotely access the user’s secure session to TVA, thereby increasing the risk that unauthorized users could gain access to TVA systems and sensitive information. Federal and agency guidance call for effective patch management, firewall configuration, and application security settings. TVA has a patch management policy that requires it to regularly monitor, identify, and remediate vulnerabilities to applications in its software inventory.
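The two-factor remote access authentication that NIST recommends, noted above, is commonly implemented with time-based one-time passwords. The following is a minimal sketch per RFC 6238 using only the Python standard library; it is generic, not TVA's VPN configuration, and the drift-tolerance window is an assumed policy choice:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at, step=30, digits=6):
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    counter = struct.pack(">Q", at // step)               # time-step counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret, submitted, now=None):
    """Accept the current 30-second window plus one adjacent window for clock drift."""
    now = int(time.time()) if now is None else now
    return any(hmac.compare_digest(totp(secret, now + d), submitted)
               for d in (-30, 0, 30))
```

The one-time code supplements, rather than replaces, the user's password, giving the trusted communications path a second, short-lived credential.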
NIST guidance also states that firewalls should be carefully configured to provide adequate protection. Furthermore, NIST guidance states that organizations should effectively configure security settings in key applications to the highest level possible. However, almost all of the workstations and servers that we examined on the corporate network lacked key security patches or had inadequate security settings. Furthermore, TVA did not effectively implement host firewall controls on its laptops. In addition, inadequate security settings existed in key applications installed on laptops, servers, and workstations we examined. Consequently, TVA is at an increased risk that known vulnerabilities in these applications could allow an attacker to execute malicious code and gain control of or compromise a system. Federal and agency guidance state that organizations should have strong passwords, identification and authentication, and network segmentation. National Security Agency guidance states that Windows passwords should be 12 or more characters long, include upper and lower case letters, numbers, and special characters, and not consist of dictionary words and has advised against the use of weak encryption. NIST guidance states that systems should uniquely identify and authenticate users with passwords or other authentication mechanisms or implement other compensating controls. NIST guidance also states that organizations should take steps to secure their e-mail systems. Finally, NIST guidance states that organizations should partition networks containing higher risk systems from lower risk systems and configure interfaces between those systems to manage risk. However, the TVA corporate network used several protocols and devices that did not provide sufficient security controls. For example, certain network protocols and devices were not adequately protected by password or authentication controls or encryption. 
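The NSA password rules cited above translate directly into a simple check. This sketch assumes a placeholder dictionary word list; a real deployment would use a full word list and apply the check at password-change time:

```python
import string

DICTIONARY_WORDS = {"password", "tennessee", "letmein"}  # placeholder word list

def meets_policy(pw):
    """Check the cited NSA rules: 12 or more characters, all four
    character classes, and not a plain dictionary word."""
    has_all_classes = (any(c.islower() for c in pw) and
                       any(c.isupper() for c in pw) and
                       any(c.isdigit() for c in pw) and
                       any(c in string.punctuation for c in pw))
    return len(pw) >= 12 and has_all_classes and pw.lower() not in DICTIONARY_WORDS
```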
In addition, TVA had network services that spanned different security network segments. As a result, a malicious user could exploit these weaknesses to gain access to sensitive systems or to otherwise modify or disrupt network traffic. Even strong controls may not block all intrusions and misuse, but organizations can reduce the risks associated with such events if they take steps to promptly detect, report, and respond to them before significant damage is done. In addition, analyzing security events allows organizations to gain a better understanding of the threats to their information and the costs of their security-related problems. Such analyses can pinpoint vulnerabilities that need to be eliminated so that they will not be exploited again. NIST states that intrusion detection is the process of monitoring events occurring in a computer system or network and analyzing the events for signs of intrusion, which it defines as an attempt to compromise the confidentiality, integrity, or availability of a computer or network. NIST guidance prescribes network and host-based intrusion detection systems as a means of protecting systems from the threats that come with increasing network connectivity. TVA had limited ability to effectively monitor its network with its intrusion detection system. Although a network intrusion detection system was deployed by TVA to monitor network traffic, it could not effectively monitor key computer assets. As a result, there is an increased risk that unauthorized access to TVA’s networks may not be detected and mitigated in a timely manner. TVA’s control system networks and devices on these networks were vulnerable to disruption due to inadequate information security controls. 
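One way to surface the kind of monitoring gap described above is to compare the addresses of key assets against the subnets the intrusion detection sensors actually observe. The subnet and asset values below are assumed for illustration:

```python
import ipaddress

# Assumed sensor placement: only this subnet is visible to the IDS.
MONITORED_SUBNETS = [ipaddress.ip_network("10.1.0.0/16")]

def unmonitored(assets):
    """Return names of key assets whose addresses no IDS sensor can observe."""
    gaps = []
    for name, addr in assets.items():
        ip = ipaddress.ip_address(addr)
        if not any(ip in net for net in MONITORED_SUBNETS):
            gaps.append(name)
    return gaps
```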
Specifically, firewalls were either bypassed or inadequately configured, passwords were either weak or not used at all, logging of certain activity was limited, configuration management policies for control systems software were not consistently implemented, and servers and workstations lacked key patches and effective virus protection. The combination of these weaknesses with the weaknesses in the TVA corporate network identified in the previous section places TVA’s control systems that operate its critical infrastructures at increased risk of unauthorized modification or disruption by both internal and external threats. A firewall is a hardware or software component that protects given computers or networks from attacks by blocking network traffic. NIST guidance states that firewalls should be configured to provide adequate protection for the organization’s networks and that the transmitted information between interconnected systems should be controlled and regulated. TVA had implemented firewalls to segment control systems networks from the corporate network at all facilities we reviewed with connections between these two networks. However, firewalls at three of six facilities reviewed were either bypassed or inadequately configured. As a result, the hosts on higher security control system networks were at increased risk of compromise or disruption from the other lower security networks. Passwords are used to establish the validity of a user’s claimed identity by requesting some kind of information that is known only by the user—a process known as authentication. The combination of identification, using, for example, a unique user account, and authentication, using, for example, a password, provides the basis for establishing individual accountability and for controlling access to the system. 
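The firewall segmentation principle discussed above (deny by default, with narrow allowances between security zones) can be sketched as a first-match rule evaluation. The zone names and port number here are hypothetical, not TVA's actual rule set:

```python
# Hypothetical ordered rule set: first match wins; anything unmatched is denied.
RULES = [
    {"src": "dmz", "dst": "control", "port": 502, "action": "allow"},
    {"src": "corporate", "dst": "control", "port": None, "action": "deny"},
]

def decide(src, dst, port):
    """Evaluate a connection attempt against the ordered rule set."""
    for rule in RULES:
        if rule["src"] == src and rule["dst"] == dst and rule["port"] in (None, port):
            return rule["action"]
    return "deny"  # default deny: any traffic not explicitly allowed is blocked
```

A bypassed or misconfigured firewall corresponds to the "deny" rules above being absent or unreachable, letting lower security zones reach control system hosts directly.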
In cases where passwords cannot be implemented because of technological limitations or other concerns, such as impact on emergency response, NIST states that an organization should document controls that have been put in place to compensate for this weakness. TVA policy requires authentication of users except where security requirements or limitations in the hardware or software preclude it. In addition, agency policy requires users to establish complex passwords. TVA did not have effective passwords or other documented compensating controls governing control systems we reviewed. According to agency officials, in certain cases, passwords were not technologically possible to implement but in these cases, there were no documented compensating controls. Until the agency implements either effective password practices or documented compensating controls, it faces an increased risk of unauthorized access to its control systems. Determining what, when, and by whom specific actions are taken on a system is crucial to establishing individual accountability, monitoring compliance with security policies, and investigating security violations. Audit and monitoring involves the regular collection, review, and analysis of auditable events for indications of inappropriate or unusual activity and the appropriate investigation and reporting of such activity. Audit and monitoring can help security professionals routinely assess computer security, perform investigations during and after an attack, and even recognize an ongoing attack. Federal guidance states that organizations should develop formal audit policies and procedures. TVA guidance states that sufficient audit logs should be maintained that allow monitoring of key user activities. While TVA had taken steps to establish audit logs for its transmission control centers, it had not established effective audit logs or compensating controls at other facilities we reviewed. 
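The individual accountability discussed above requires that each logged event name a unique user account, an action, and a time. A minimal sketch of such an audit record follows; the field names are assumptions, not TVA's actual log format:

```python
import json
import time

def audit_record(user, action, target, clock=time.time):
    """One audit entry tying a control system action to a specific user account."""
    return {"ts": int(clock()), "user": user, "action": action, "target": target}

def serialize(record):
    """Render the entry as a single JSON log line for collection and review."""
    return json.dumps(record)
```

With shared accounts, the "user" field cannot identify who acted, which is why the report notes that logs alone serve little purpose without unique accounts or compensating controls.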
According to agency officials, system limitations at these facilities have historically meant that multiple users shared a single account to access these control systems. Therefore, audit logs would not have served a useful purpose because activities could not be traced to a single user. Until TVA establishes detailed audit logs for its control systems at these facilities or compensating controls in cases where such logs are not feasible, it risks being unable to determine if malicious incidents are occurring and, after an event occurs, being able to determine who or what caused the incident. Federal guidance states that all applications and changes to those applications should go through a formal, documented process that identifies all changes to the baseline configuration. Also, procedures should ensure that no unauthorized software is installed. TVA has established configuration management policies and procedures for its information technology systems. Specifically, its policies define the roles and responsibilities of application owners and developers; require business units to implement procedural controls that define documentation and testing required for software changes; and establish procedures to ensure that all changes relating to infrastructure and applications be managed and controlled. However, TVA did not consistently apply its configuration management policies and procedures to control systems. The transmission control system had a configuration management process, and the hardware at individual plants was governed by a configuration management process, including plant drawings that tracked individual pieces of equipment. However, there was no formal configuration management process for software that was part of the control systems at the hydroelectric and fossil facilities that we reviewed. As a result, increased risk exists that unapproved changes to control systems could be made. 
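A baseline comparison of the kind a formal configuration management process relies on can be sketched by fingerprinting each software or configuration file and diffing the result against the approved inventory. The file names below are hypothetical:

```python
import hashlib

def digest(data):
    """SHA-256 fingerprint of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def drift(baseline, current):
    """Files whose contents differ from, or are missing in, the approved baseline."""
    return sorted(name for name in set(baseline) | set(current)
                  if baseline.get(name) != current.get(name))
```

Any name returned by drift() represents a change that was not captured in the documented baseline, i.e., a potentially unapproved modification.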
Patch management, including up-to-date patch installation, helps to mitigate vulnerabilities associated with flaws in software code, which could be exploited to cause significant damage. According to NIST, agencies should identify, report, and correct their information system flaws. NIST also notes that tracking patches allows organizations to identify which patches are installed on a system and provides confirmation that the appropriate patches have been applied. Moreover, TVA policy requires the agency to remediate these vulnerabilities in a timely manner. TVA had not installed current versions of patches for key applications on computers on control systems networks. While TVA had an agencywide policy and procedure for patch management, these policies did not apply to individual plant-level control systems. According to the operators at two of the facilities we reviewed, they applied vendor-approved patches to control systems but did not track versions of patches on these machines. Failure to keep software patches up-to-date could allow unauthorized individuals to gain access to network resources or disrupt network operations. Virus and worm protection for information systems is a serious challenge. Computer attack tools and techniques are becoming increasingly sophisticated; viruses are spreading faster as a result of the increasing connectivity of today’s networks; commercial off-the-shelf products can be easily exploited for attack by their users; and there is no single solution such as firewalls or encryption to protect systems. To combat viruses and worms specifically, entities should keep antivirus programs up-to-date. According to NIST, agencies should implement malicious code protection that includes a capability for automatic updates so that virus definitions are kept up-to-date on servers, workstations, and mobile computing devices.
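The patch tracking NIST describes above amounts to a set difference between the vendor-approved patch list and the patches installed on each host. A minimal sketch, with hypothetical host and patch names:

```python
def missing_patches(required, installed):
    """Vendor-approved patches that a host has not yet applied."""
    return required - installed

def patch_report(hosts, required):
    """Per-host patch gaps, supporting the tracking NIST describes."""
    return {host: sorted(missing_patches(required, installed))
            for host, installed in hosts.items()
            if missing_patches(required, installed)}
```

Without this kind of tracking, operators can apply patches (as those at the two facilities reported) yet still be unable to confirm which hosts remain exposed.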
Virus-scanning software should be provided at critical entry points, such as remote-access servers, and at each desktop system on the network. Although TVA implemented antivirus software on its transmission control systems network, it did not consistently implement antivirus software on other control systems we reviewed. In one case, according to agency officials, the vendor that developed the control systems software would not support an antivirus application, and the agency did not have plans to require the vendor to address this weakness. In another case, antivirus software was implemented, but it was not up-to-date. In the event that using antivirus software is infeasible on a control system, the agency must document the controls, such as training or physical security, that would compensate for this deficiency. TVA had not done this. According to agency officials, such documentation is under way for its hydroelectric facilities, but not for other facilities. As a result, there is increased risk that the integrity of these networks and devices could be compromised. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls restrict physical access to computer resources, usually by limiting access to the buildings and rooms in which the resources are housed and by periodically reviewing the access granted in order to ensure that access continues to be appropriate. TVA policy requires that appropriate physical and environmental controls be implemented to provide security commensurate with the level of risk and magnitude of harm that would result from loss, misuse, unauthorized access, or modification of information or information systems. Further, NIST policy requires that federal organizations implement a variety of physical security controls to protect information and industrial control systems and the facilities in which they are located. 
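A freshness check of the kind that would catch the out-of-date virus definitions noted above can be sketched in a few lines; the seven-day window is an assumed policy value, not a TVA or NIST requirement:

```python
SECONDS_PER_DAY = 86400

def stale_definitions(last_update_ts, now_ts, max_age_days=7):
    """Flag a host whose virus definitions are older than the allowed window."""
    return (now_ts - last_update_ts) > max_age_days * SECONDS_PER_DAY
```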
TVA had taken steps to provide physical security for its control systems. For example, it had issued electronic badges to agency personnel and contractors to help control access to many of its sensitive and restricted areas. TVA had also established law enforcement liaisons that help ensure additional backup security and facilitate the accurate flow of timely security information between appropriate government agencies. In addition, the agency had implemented physical security training for its employees to help achieve greater security awareness and accountability. However, the agency had not effectively implemented physical security controls at various locations, as the following examples illustrate: Live network jacks connected to TVA’s internal network at certain facilities we reviewed had not been adequately secured from access by the public. TVA did not adequately control or change its keys to industrial control rooms containing sensitive equipment at one facility we reviewed. For example, the agency could neither account for all keys issued at the facility, which relies on manual locks for the security of rooms containing sensitive computer and control equipment, nor could it determine when keys had last been changed. TVA did not have an effective visitor control program at one facility we reviewed. For example, the agency had not maintained a visitor log describing visitors’ names, organizations, purpose of visits, forms of identification, or the names of the persons visited. Physical security policies and plans were either in draft form or were nonexistent. Rooms containing sensitive IT equipment had not been adequately environmentally protected. 
For example, sufficient emergency lighting was not available outside the control room at one facility we reviewed, a server room at the facility had no smoke detection capability, a control room at the facility contained a kitchen (a potential fire and water hazard), and a communications room had batteries collocated with sensitive communications gear. TVA had not always ensured that access to sensitive computing and industrial control systems resources had been granted to only those who needed it to perform their jobs at one facility we reviewed. About 75 percent of those who were issued facility badges had access to a facility computer room, but the vast majority of these badgeholders did not need access to the room. While TVA officials stated that all of those with access had been through the background investigation and training process required for all employees at the facility, an underlying principle for secure computer systems and data recommended by NIST is that users should be granted only those access rights and permissions needed to perform their official duties. As a consequence of weaknesses such as these, increased risk exists that sensitive computing resources and data could be inadvertently or deliberately misused or destroyed. Federal guidance and best practices in information security call for the use of multiple layers of defense to secure information resources. These multiple layers include the use of protection mechanisms and key network control points such as firewalls, routers, and intrusion detection systems to segment and control access to networks. Higher risk networks and devices, such as critical infrastructure control systems, may require additional security controls and should be on networks that are separate from lower risk devices. TVA had deployed a layered defense model to control access between and among the corporate and control systems networks. 
For example, in all cases we examined, control systems were located on networks that had been segmented from business computing resources. The agency had also deployed protection mechanisms such as firewalls, router access control lists, virtual local area networking, and physical security controls at multiple locations throughout its network. For example, TVA’s transmission control organization used layered networks with increasing levels of security to separate critical control devices from the corporate network. However, these mechanisms and information security controls had been inconsistently applied. As a result, the effectiveness of the multiple layers of defense was limited. For example, while the transmission control organization network restricted access to control systems using multiple firewalls at outer and inner network boundaries, some plant systems had significantly fewer levels of security to reach control systems that impacted the same facilities. In addition, specific weaknesses in security configurations on key systems further reduced the overall effectiveness of security controls. The cumulative effect of these individual weaknesses and the interconnectedness of TVA critical infrastructure control systems places these systems at risk of compromise or disruption from internal and external threats. An underlying reason for TVA’s information security control weaknesses is that it had not consistently implemented significant elements of its information security program. The effective implementation of an information security program includes implementing the key elements required under FISMA and the establishment of a continuing cycle of activity—which includes developing an inventory of systems, assessing risk, developing policies and procedures, developing security plans, testing and monitoring the effectiveness of controls, identifying and tracking remedial actions, and establishing appropriate training. 
TVA had not consistently implemented key elements of these activities. As a result of not fully developing and implementing its information security program, an increased potential for disruption or compromise of its control systems exists. FISMA requires that each agency develop, maintain, and annually update an inventory of major information systems operated by the agency or that are under its control. A complete and accurate inventory of major information systems is a key element of managing the agency’s information technology resources, including the security of those resources. The inventory can be used to track agency systems for purposes such as periodic security testing and evaluation, patch management, contingency planning, and identifying system interconnections. TVA requires that the senior agency information security officer maintain an authoritative inventory of general support systems, major applications, major information systems, and minor applications. TVA did not have a complete and accurate inventory of its control systems. In its fiscal year 2007 FISMA submission, TVA included in its inventory of major applications the transmission and the hydro automation control systems. Although TVA stated that the plant control systems at its nuclear and fossil facilities were minor applications, these applications had not been included in TVA’s inventory of minor applications or accounted for as part of a consolidated general support system. These systems are essential to the automated operation of generation facilities. At the conclusion of our review, agency officials stated they had developed a plan to produce a more complete and accurate system inventory by September 2008. Until TVA has a complete and accurate inventory of its control systems, it cannot ensure that the appropriate security controls have been implemented to protect these systems. 
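The inventory gap described above can be sketched as a simple consistency check. This is an illustrative Python sketch only; the system names, categories, and the `gss` (general support system) field are hypothetical stand-ins, not TVA's actual inventory records.

```python
# Illustrative FISMA-style inventory check: every minor application should be
# assigned to a general support system (GSS) or consolidated inventory.
MAJOR = "major application"
MINOR = "minor application"

inventory = [
    {"name": "transmission control system", "category": MAJOR, "gss": None},
    {"name": "hydro automation control system", "category": MAJOR, "gss": None},
    # Plant-level systems labeled minor but never assigned to a GSS --
    # the kind of gap described in the finding.
    {"name": "nuclear plant control system", "category": MINOR, "gss": None},
    {"name": "fossil plant control system", "category": MINOR, "gss": None},
]

def unaccounted_minor_systems(systems):
    """Return minor applications not accounted for under any GSS."""
    return [s["name"] for s in systems
            if s["category"] == MINOR and s["gss"] is None]

print(unaccounted_minor_systems(inventory))
```

A check of this kind could be run against an authoritative inventory to flag systems that would otherwise escape periodic security testing, patch management, and contingency planning.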
FISMA mandates that agencies assess the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of their information and information systems. The Federal Information Processing Standard (FIPS) 199, Standards for Security Categorization of Federal Information and Information Systems, and related NIST guidance provide a common framework for categorizing systems according to risk. The framework establishes three levels of potential impact on organizational operations, assets, or individuals should a breach of security occur—high (severe or catastrophic), moderate (serious), and low (limited)—and it is used to determine the impact for each of the FISMA-specified security objectives of confidentiality, integrity, and availability. Once determined, security categories are to be used in conjunction with vulnerability and threat information in determining minimum security requirements for the system and in assessing the risk to an organization. Risk assessments help ensure that the greatest risks have been identified and addressed, increase the understanding of risk, and provide support for needed controls. Office of Management and Budget (OMB) Circular A-130, appendix III, prescribes that risk be assessed when significant changes are made to major systems and applications in an agency’s inventory or at least every 3 years. Consistent with NIST guidance, TVA policy states that risk assessments should be updated to reflect the results of security tests and evaluations. TVA had not completed assigning risk levels or assessing the risk of its control systems. While TVA categorized the transmission and hydro automation control systems as high-impact systems using FIPS 199, its nuclear division and fossil business unit, which include its coal and combustion turbine facilities, had not assigned risk levels to their control systems. 
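Under related NIST guidance, a system's overall impact level is taken as the highest impact assigned across the three security objectives (the "high-water mark"). A minimal sketch of that rule, with illustrative inputs:

```python
# FIPS 199 / FIPS 200 "high-water mark": the overall impact level is the
# highest of the per-objective levels for confidentiality, integrity,
# and availability.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def overall_impact(confidentiality, integrity, availability):
    """Return the highest of the three per-objective impact levels."""
    return max((confidentiality, integrity, availability), key=LEVELS.get)

# A control system whose loss of availability would be severe is
# categorized high overall, even if its confidentiality impact is low.
print(overall_impact("low", "moderate", "high"))  # high
```

This is why a generation control system can be a high-impact system even when the data it processes is not sensitive: availability dominates the categorization.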
Further, although TVA had performed a risk assessment for the transmission control system, the risk assessment did not include the risks associated with vulnerabilities identified during the latest security test and evaluation. TVA had not completed risk assessments for the control systems at its nuclear, hydroelectric, coal, and combustion turbine facilities. According to TVA officials, the agency plans to complete risk assessments by May 2008 at the nuclear facility and June 2008 at the hydroelectric facility. For the fossil facility and all remaining control systems throughout TVA, agency officials stated that they would complete the security categorization of these systems by the end of September 2008. However, no date has been set for completion of risk assessments. Without assigned risk levels, TVA cannot make risk-based decisions on the security needs of its information and information systems. Moreover, until TVA assesses the risks of all its control systems, the agency cannot be assured that the appropriate level of controls has been applied to help prevent the unauthorized access, use, disclosure, disruption, modification, or destruction of those systems. A key task in developing, documenting, and implementing an effective information security program is to establish and implement risk-based policies, procedures, and technical standards that cover security over an agency’s computing environment. If properly implemented, policies and procedures can help to reduce the risk that could come from unauthorized access or disruption of services. Because security policies are the primary mechanism by which management communicates its views and requirements, it is important to document and implement them. Several shortcomings existed in TVA’s information security policies. First, the agency had not consistently applied information security policies to its control systems. 
Second, business unit security policies were not always consistent with overall agency information security policies. Third, cyber security responsibilities for interfaces between TVA’s transmission control system and its fossil and hydroelectric generation units had not been documented. Fourth, TVA’s patch management process was not in compliance with federal guidance. Finally, physical security standards for control system sites were in draft. TVA had developed and documented policies, standards, and guidelines for information security; however, it had not consistently applied these policies to its control systems. Although neither FISMA nor TVA’s agencywide IT security policy explicitly mentions control systems, our analysis of NIST guidance and the stated position of NIST officials is that the guidance does apply to industrial control systems, such as the systems that TVA uses to operate critical infrastructures. Furthermore, NIST has recently developed and released guidance to assist agencies in applying federal IT security requirements to control systems. As a result of not applying this guidance with the same level of rigor to its control systems, numerous shortfalls existed in TVA’s information security management program for its control systems, including outdated risk assessments; incomplete system security categorizations, system security plans, and testing and evaluation activities; and an ineffective remediation process. TVA officials stated that they are in the process of applying current NIST criteria to their control systems and plan to complete this process by the end of fiscal year 2008. Until TVA consistently applies federal IT security policies to its control systems and addresses identified weaknesses, its control systems will remain at risk of compromise and disruption. 
While two TVA business units had developed IT security policies to address anticipated cyber security guidance from their respective industries, these policies were not always consistent with agencywide IT security policy. According to TVA policy, business units may establish their own IT security policies but must still comply with agencywide IT security policy. For example, TVA’s Nuclear Power Group had developed a cyber security policy and the Power Systems Operations business unit had developed two cyber security policies—one business unit policy that was in draft, and one approved policy developed by and applicable to the unit’s Transmission and Reliability Organization. These policies addressed many of the same issues as TVA’s agencywide IT security policy, including establishing roles and responsibilities, access controls, configuration management, training, and emergency planning and response. However, the policies were not always consistent with the agencywide IT security policy. For example, although both the Nuclear Power Group and the Transmission and Reliability Organization policies had been developed to establish requirements for cyber security of plant systems, neither policy directed system security officers to implement minimum baseline security controls to protect the confidentiality, integrity, and availability of these systems, as is required by agency policy, nor did they establish a link or reference to agencywide IT security policy or federal IT security requirements. Although the Power System Operations cyber security policy reiterated requirements outlined by FISMA and the TVA IT security policy, this policy remained in draft. The existence of inconsistent policies at different levels of TVA could hinder its ability to apply IT security requirements consistently across the agency. 
Without developing and implementing consistent policies, procedures, and standards across all agency divisions and groups, TVA has less assurance that its systems controlling critical infrastructure are protected from unauthorized access and cyber threats. NIST guidance states that organizations should authorize all connections from an information system to another information system through the use of system connection agreements. Documentation should include security roles and responsibilities and any service level agreements, which should define the expectations of performance for each required security control, and remedy and response requirements for any identified instance of noncompliance. The agreements established by TVA’s Transmission and Reliability Organization with other TVA business units did not fully address information that should be included based on NIST guidance. For example, the control systems operated by the Transmission and Reliability Organization interface with power plant control systems operated by TVA’s fossil and hydroelectric business units. Although the transmission organization had established agreements with the fossil and hydroelectric business units, these agreements made no mention of cyber security roles and responsibilities, performance expectations for security controls, and remedy and response requirements for noncompliance. TVA officials stated that the type of interface between the transmission control system and individual plant systems means that, in most cases, a cyber security incident on a plant control network would not impact the overall transmission control network. 
While the likelihood of direct transmission of malware such as a virus might be small, without clear documentation of the information required in an intergroup agreement, TVA faces the risk that security controls may not be in place or work as intended at an individual plant. In that event, critical generation equipment might not be able to start, safely shut down, or otherwise be controlled by the transmission control system when necessary. This is of particular concern because of the variation in cyber security controls that we observed between the overall transmission control system and the individual plants. Without clear documentation of cyber security-related roles and responsibilities, TVA faces the risk that security controls may not be in place or work as intended. NIST guidance states that federal agencies should create a comprehensive patch management process. The process should include monitoring of security sources for vulnerability announcements; an accurate inventory of the organization’s IT resources, using commercially available automated inventory management tools whenever possible; prioritization of the order in which the vulnerabilities are addressed, with a focus on high-priority systems such as those essential for mission-critical operations; and automated deployment of patches to IT devices using enterprise patch management tools. TVA had not fully implemented such a comprehensive process. It had a patch management process, including staff whose primary responsibility was to monitor security sources for vulnerability announcements. However, the agency lacked an accurate inventory of its IT resources produced using an automated management tool. For example, agency staff did not have timely access to version numbers and build numbers of software applications in the agency, although officials stated this information could be obtained manually. 
In addition, the agency’s patch management policy did not apply to individual plant-level control systems or network infrastructure devices such as routers and switches. Furthermore, TVA’s written guidance on patch management provided only limited guidance on how to prioritize vulnerabilities. For example, the guidance did not refer to the criticality of IT resources. In addition, as previously noted, the agency had not categorized the impact of many of its control systems. The guidance also did not specify situations for which it was acceptable to upgrade or downgrade a vulnerability’s priority from that given by industry standard sources such as the vendor or third-party patch tracking services. As a result, patches that were identified as critical, meaning they should be applied immediately to vulnerable systems, were not applied in a timely manner. For example, agency staff had reduced the priority of three vulnerabilities identified as critical or important by the vendor or a patch tracking service and did not provide sufficient documentation of the basis for this decision. TVA also did not document many vulnerabilities on its systems. For a 15-month period, TVA documented its analysis of 351 reported vulnerabilities, while NIST’s National Vulnerability Database reported about 2,000 vulnerabilities rated as high or medium-risk for the types of systems in operation at TVA for the same time period. Finally, the agency lacked an automated tool to assess the deployment of many types of application patches. As a result, certain systems were missing patches more than 6 months past TVA deadlines for patching. Without a fully effective patch management process, TVA faces an increased risk that critical systems may remain vulnerable to known vulnerabilities and be open to compromise or disruption. 
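The prioritization shortfalls described above can be illustrated with a simple scoring sketch. This is hypothetical Python, not TVA's actual process; the severity and criticality scales and system names are assumptions, though the rule that an undocumented downgrade does not stand mirrors the finding.

```python
# Illustrative patch prioritization: combine vendor severity with asset
# criticality, and honor a downgrade from the vendor's rating only when
# a documented justification accompanies it.
SEVERITY = {"critical": 3, "important": 2, "moderate": 1, "low": 0}
CRITICALITY = {"mission-critical": 3, "business": 2, "support": 1}

def patch_priority(vendor_severity, asset_criticality,
                   downgraded_to=None, justification=None):
    """Score a patch for scheduling; undocumented downgrades are ignored."""
    severity = vendor_severity
    if downgraded_to is not None and justification is not None:
        severity = downgraded_to
    return SEVERITY[severity] * CRITICALITY[asset_criticality]

# Hypothetical backlog, ordered so critical patches on mission-critical
# systems are applied first.
queue = sorted(
    [("HMI server", "critical", "mission-critical"),
     ("office workstation", "critical", "business"),
     ("test rig", "moderate", "support")],
    key=lambda p: patch_priority(p[1], p[2]),
    reverse=True,
)
print([name for name, _, _ in queue])
```

Under this scheme, lowering a vendor-rated critical patch without a recorded justification leaves its score unchanged, so the downgrade is visible and auditable rather than silent.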
NIST guidance states that organizations should develop formal documented physical security policies and procedures to facilitate the implementation of physical and environmental protection controls. However, TVA’s physical security standards for protection of its assets, including sensitive computer and industrial control equipment, as well as employees, contractors, visitors, and the general public, had been drafted but not approved by management. These standards are intended to provide clear and consistent physical security policy for all nonnuclear facilities. According to TVA Police officials, most sites budget for and implement their own physical security guidance and measures. Finalized agencywide physical security standards would provide facilities with consistent guidelines for making risk-based decisions on implementing security measures. Consequently, TVA has less assurance that control systems will be consistently and effectively protected from inadvertent or deliberate misuse, including damage or destruction. The objective of system security planning is to improve the protection of IT resources. A system security plan provides a complete and up-to-date overview of the system’s security requirements and describes the controls that are in place—or planned—to meet those requirements. FISMA requires that agency information security programs include subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate. OMB Circular A-130 specifies that agencies develop and implement system security plans for major applications and for general support systems and that these plans address policies and procedures for providing management, operational, and technical controls. 
NIST guidance states that minor applications that are not connected to a general support system or major application should be described in a general support system plan that has either a common physical location or is supported by the same organization. Further, TVA policy states that minor applications should be briefly described in a general support system security plan. NIST guidance states that security plans should contain key information needed to select the appropriate security controls, such as the FIPS 199 category and the certification and accreditation status of the connected systems. Plans should also be updated to include the latest security test and evaluation and risk assessment results. TVA had only developed a system security plan that covered two of the six facilities we reviewed, and this plan was incomplete and not up-to-date. The transmission control system security plan, which addressed systems at two transmission control centers, included many elements required by NIST, such as the description of the individuals responsible for security, and addressed management, operational, and technical controls. Although the plan listed interconnected systems, it did not completely address interconnectivity with other systems operated by other organizations. Specifically, it did not include essential information needed to select the appropriate security controls, such as the FIPS 199 category or the certification and accreditation status of the connected systems. Further, the plan was not updated to include the latest security test and evaluation or risk assessment results. According to agency officials, TVA is developing a system security plan for its hydroelectric automation control system as part of its certification and accreditation process. Agency officials stated that this plan will be completed by June 2008. TVA nuclear and fossil facilities had not developed security plans for their control systems. 
Agency officials stated that they were planning to develop security plans and complete the certification and accreditation process for these control systems. The plan for the nuclear facility is scheduled to be completed by June 2008. For the fossil facility, TVA officials stated that they intend to complete a security plan and certification and accreditation activities based on the results of security categorizations that will be completed by September 2008. However, no time frame has been set for completion of the plan or accreditation. Until these activities are completed, TVA cannot ensure that the security requirements have been identified and that the appropriate controls will be in place to protect these critical control systems. FISMA mandates that federal employees and contractors who use agency information systems be provided with periodic training in information security awareness. FISMA also requires agencies to provide appropriate training on information security to personnel who have significant security responsibilities. This training, described in NIST guidance, should inform personnel, including contractors and other users of information systems supporting the operations and assets of an agency, of information security risks associated with their activities and their roles and responsibilities to properly and effectively implement the practices that are designed to reduce these risks. Depending on an employee’s specific security role, training could include specialized topics such as incident detection and response, physical security, or firewall configuration. TVA also has a policy that requires all employees and others who have access to its corporate network to complete annual security awareness training. The policy requires that employees and contractors who do not complete the training within a set time frame have their network access suspended. 
Although for fiscal year 2007 TVA reported that 98 percent of its employees and contractors completed its annual security awareness training, other shortfalls existed in TVA’s training program. For example, the agency policy of suspending network access for employees who did not complete security awareness training did not apply to control system-specific networks, such as those at the nuclear, hydroelectric, and fossil facilities we reviewed. At these sites, there were no controls in place to enforce completion of the required training by employees using these control systems. In addition, a substantial number of TVA employees who have significant security responsibilities did not complete role-based training in the last fiscal year, and the required training did not include specialized technical topics. In fiscal year 2007, TVA reported that only 25 percent of 197 applicable employees who had significant IT security responsibilities had completed role-based training, compared with 86 percent and 72 percent who reportedly received such training in fiscal years 2005 and 2006, respectively. According to agency officials, training had not been completed primarily due to a lack of staff to provide the training. Furthermore, the role-based training that was required was focused on management and procedural issues. TVA had technical security training available to its information security staff, which comprised approximately 14 of the 197 employees who needed role-based training, but this training was not required. For these 14 staff, TVA reported a 100 percent completion rate for the technical training. At the end of our review, agency officials provided a plan to improve the number of employees completing role-based training and to examine adding technical training to training requirements. The plan is to be completed by July 2008. 
Until this plan is fully implemented, security lapses are more likely to occur and could contribute to information security weaknesses at TVA. A key element of an information security program is ongoing testing and evaluation to ensure that systems are in compliance with policies and that the policies and controls are both appropriate and effective. Testing and evaluation demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies areas of noncompliance and ineffectiveness requiring remediation. Starting in fiscal year 2007, OMB required agencies to discontinue using SP 800-26 and to use NIST SP 800-53A for the assessment of security controls effectiveness when performing periodic security testing and evaluation of their information systems. In addition, TVA policy requires all minor applications to be assigned to a general support system or major application that is tested and evaluated as part of the certification and accreditation process performed every 3 years. TVA did not properly test and evaluate all of its control systems. Although TVA had performed annual self-assessments of the two control systems designated as major applications (transmission and hydro automation control systems) in fiscal year 2007, it did so using outdated NIST guidance contained in SP 800-26, rather than the current guidance in SP 800-53A. Of these two control systems, TVA performed a complete test and evaluation of the security controls on one of the systems—the transmission control system—within the last 3 years. Although TVA officials at the nuclear and fossil facilities considered their plant-level control systems to be minor applications, they were not part of any general support system. As a result, TVA did not appropriately identify, test, or evaluate the effectiveness of the security controls in place for the control systems at these facilities. 
Without appropriate tests and evaluations of all its control systems, the agency has limited assurance that policies and controls are appropriate and working as intended. Additionally, increased risk exists that undetected vulnerabilities could be exploited to allow unauthorized access to these critical systems. A remedial action plan is a key component described in FISMA. Such a plan assists agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses that are found in information systems. In its annual FISMA guidance to agencies, OMB requires agencies’ remedial action plans, also known as plans of action and milestones, to include, at a minimum, the resources necessary to correct an identified weakness, the original scheduled completion date, the status of the weakness as completed or ongoing, and key milestones with completion dates. According to TVA policy, the agency should document weaknesses found during security assessments and document any planned remedial actions to correct any deficiencies. TVA did not always address known significant deficiencies in its remedial action plans. The agency had developed a plan of action and milestones for its transmission control system; however, it did not do so for the control systems at the fossil, hydroelectric, or nuclear facilities. In addition, while the agency tracked weaknesses identified by the TVA Inspector General for its transmission control system, it did not include these weaknesses in its plan of action and milestones. Until the agency implements an effective remediation process for all control systems, it will not have assurance that the proper resources will be applied to known vulnerabilities or that those vulnerabilities will be properly mitigated. 
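The minimum fields OMB requires in a plan of action and milestones can be sketched as a simple record check. This is illustrative Python only; the weakness text, dates, and field names are hypothetical.

```python
# Minimal sketch of a POA&M entry carrying the fields OMB requires at a
# minimum: resources needed, original completion date, status, and key
# milestones with completion dates.
from datetime import date

poam_entry = {
    "weakness": "plant control network lacks documented backup procedures",
    "resources_required": "2 staff-weeks, no new funding",
    "original_completion_date": date(2008, 9, 30),
    "status": "ongoing",  # "completed" or "ongoing"
    "milestones": [
        ("draft procedures reviewed", date(2008, 6, 30)),
        ("procedures approved and tested", date(2008, 9, 30)),
    ],
}

REQUIRED = {"weakness", "resources_required", "original_completion_date",
            "status", "milestones"}

def is_complete(entry):
    """Check that a POA&M entry carries every minimum required field."""
    return REQUIRED <= entry.keys()

print(is_complete(poam_entry))
```

Tracking each weakness, including those identified by an Inspector General, as a record of this shape is what makes it possible to monitor whether resources are being applied and milestones met.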
Even strong controls may not block all intrusions and misuse, but organizations can reduce the risks associated with such events if they take steps to promptly detect, report, and respond to them before significant damage is done. In addition, analyzing security incidents allows organizations to gain a better understanding of the threats to their information and the costs of their security-related problems. Such analyses can pinpoint vulnerabilities that need to be eliminated so that they will not be exploited again. Incident reports can be used to provide valuable input for risk assessments, can help in prioritizing security improvement efforts, and can illustrate risks and related trends for senior management. FISMA and NIST guidance require that agency information security programs include procedures for detecting, reporting, and responding to security incidents, including reporting them to the U.S. Computer Emergency Readiness Team (US-CERT). Furthermore, NIST guidance prescribes network and host-based intrusion detection systems as a means of protecting systems from the threats that come with increasing network connectivity. TVA had developed incident detection, response, and reporting procedures. However, while the TVA organization responsible for operating its transmission control center had approved incident response and reporting procedures, the agencywide incident response and reporting procedure remained in draft, although it was being used by TVA information security personnel. According to agency officials, the procedure was being revised and finalized to align with incident reporting guidelines developed by US-CERT. Until TVA finalizes these procedures, it cannot be assured that facilities are prepared to respond to and report incidents in an effective manner. 
Contingency planning includes developing and testing plans and activities so that when unexpected events occur, critical operations can continue without disruption or can be promptly resumed and that critical and sensitive data are protected. If contingency planning controls are inadequate, even relatively minor interruptions can result in a loss of system function and expensive recovery efforts. For some TVA control systems, system interruptions or malfunctions could result in loss of power, injuries, or loss of life. Given these severe implications, it is critical that an entity have in place (1) procedures for protecting information systems and minimizing the risk of unplanned interruptions and (2) a plan to recover critical operations should interruptions occur. To determine whether recovery plans will work as intended, they should be tested periodically in disaster-simulation exercises. FISMA requires that each federal agency implement an information security program that includes plans and procedures to ensure continuity of operations for information systems that support the operation and assets of the agency. TVA had taken steps to address contingency planning for physical incidents such as fire, explosion, and natural disasters, and for other events such as cyber incidents. At the facilities we reviewed, staff performed regular drills and tests to address physical contingencies. According to agency officials, in many cases, these same drills are applicable to cyber incidents that could have physical consequences. In addition, the agency had developed backup procedures for key information resources, including those that support its control systems. In TVA’s transmission control centers, written backup procedures existed; however, in the hydroelectric, coal, and gas turbine facilities we reviewed, the backup procedures were not documented. 
Until TVA consistently documents backup procedures across all of its facilities, it has limited assurance that all TVA facilities will be able to respond appropriately in the event of a physical or cyber incident. TVA’s power generation and transmission critical infrastructures are important to the economy of the southeastern United States and the safety, security, and welfare of millions of people. Control systems are essential to the operation of these infrastructures; however, multiple information security weaknesses existed in both the agency’s corporate network and individual control systems networks and devices. As a result, although TVA had implemented multiple layers of information security controls to protect its critical infrastructures, such as segmenting control systems networks from the corporate network, in many cases, these layers were not as effective as intended. An underlying cause for these weaknesses is that the agency had not consistently implemented its information security program throughout the agency. If TVA does not take sufficient steps to secure its control systems and implement an information security program, it risks not being able to respond properly to a major disruption that is the result of an intended or unintended cyber incident, which could affect the agency’s operations and its customers. To improve the implementation of information security program activities for the control systems governing TVA’s critical infrastructures, we are recommending that the Chief Executive Officer of TVA take the following 19 actions: Establish a formal, documented configuration management process for changes to software governing control systems at TVA hydroelectric and fossil facilities. Establish a patch management policy for all control systems. Establish a complete and accurate inventory of agency information systems that includes each TVA control system either as a major application or as a minor application to a general support system.
Categorize and assess the risk of all control systems. Update the transmission control system risk assessment to include the risk associated with vulnerabilities identified during security testing and evaluations and self-assessments. Revise TVA information security policies and procedures to specifically mention their applicability to control systems. Ensure that any division-level information security policies and procedures established to address industry regulations or guidance are consistent with, refer to, and are fully integrated with TVA corporate security policy and federal guidance. Revise the intergroup agreements between TVA’s Transmission and Reliability Organization and its fossil and hydroelectric business units to explicitly define cyber security roles and responsibilities. Revise TVA patch management policy to clarify its applicability to control systems and network infrastructure devices, provide guidance to prioritize vulnerabilities based on criticality of IT resources, and define situations where it would be appropriate to upgrade or downgrade a vulnerability’s priority from that given by industry standard sources. Finalize draft TVA physical security standards. Complete system security plans that cover all control systems in accordance with NIST guidance and include all information required by NIST in security plans, such as the FIPS 199 category and the certification and accreditation status of connected systems. Enforce a process to ensure that employees who do not complete required security awareness training cannot access control system-specific networks. Ensure that all designated employees complete role-based security training and that this training includes relevant technical topics. Develop and implement a TVA policy to ensure that periodic (at least annual) assessments of control effectiveness use NIST SP 800-53A for major applications and general support systems. 
Perform assessments of control effectiveness following the methodology in NIST SP 800-53A. Develop and implement remedial action plans for all control systems. Include the results of inspector general assessments in the remedial action plan for the transmission control system. Finalize the draft agencywide cyber incident response procedure. Document backup procedures at all control system facilities. In a separate report designated “Limited Official Use Only,” we are also making 73 recommendations to the Chief Executive Officer of TVA to address weaknesses in information security controls. In written comments on a draft of this report, the Executive Vice President of Administrative Services for TVA agreed on the importance of protecting critical infrastructures and described several actions TVA has taken to strengthen information security for control systems, such as centralizing responsibility for cyber security within the agency. The Executive Vice President concurred with all 19 recommendations in this report and provided information on steps the agency was taking to implement the recommendations. A copy of the agency’s response is included in appendix II. Additionally, in a meeting with GAO officials, TVA officials expressed concerns about the level of detail in this report. Based on that meeting and subsequent discussions with agency officials, we have modified the wording in this report to address the agency’s concerns. The agency also provided technical comments that we have incorporated where appropriate. We are sending copies of this report to OMB, the TVA Inspector General, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.
If you have any questions on matters discussed in this report, please contact Gregory Wilshusen at (202) 512-6244 or Nabajyoti Barkakati at (202) 512-4499, or by e-mail at wilshuseng@gao.gov and barkakatin@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objective of our review was to determine whether the Tennessee Valley Authority (TVA) has effectively implemented appropriate information security practices for the control systems used to operate its critical infrastructure. We conducted our review using our Federal Information System Controls Audit Manual, a methodology for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized data. We focused our work on the control systems located at six TVA facilities. These facilities were selected to provide a cross-section of the variety of control systems by type of generation facility (coal, combustion turbine, hydroelectric, and nuclear) and function (generation and transmission). To evaluate the effectiveness of TVA’s information security practices, we conducted tests and observations using federal guidance, checklists, and vendor best practices for information security. Where federal requirements or guidelines, including National Institute of Standards and Technology (NIST) guidance, were applicable, we used them to assess the extent to which TVA had complied with specific requirements. Specifically, we used NIST guidance for the security of federal information systems.
For example, we analyzed the password hashing implementation used for identification and evaluated and reviewed the complexity and expiration of passwords on servers to determine if strong password management was enforced; examined user and application system authorizations to determine whether they had more permissions than necessary to perform their assigned functions; analyzed system configurations to determine whether sensitive data were encrypted; observed whether system security software was configured to log successful system changes; inspected key servers, workstations, and network infrastructure devices to determine whether critical patches had been installed or were up-to-date; tested and observed physical access controls to determine if computer facilities and resources were being protected from espionage, sabotage, damage, and theft; and synthesized the information obtained about networks and applications to develop an accurate understanding of overall network and system architecture. The Federal Information Security Management Act of 2002 (FISMA) establishes key elements of an effective agencywide information security program.
We evaluated TVA’s implementation of these key elements by reviewing TVA’s system inventory to determine whether it contained an accurate and comprehensive list of control systems; analyzing risk assessments for key TVA systems to determine whether risks and threats were documented; examining security plans to determine if management, operational, and technical controls were in place or planned and whether these security plans were updated; analyzing TVA policies, procedures, practices, and standards to determine their effectiveness in providing guidance to personnel responsible for securing information and information systems; inspecting training records for personnel with significant responsibilities to determine if they received training commensurate with those responsibilities; analyzing test plans and test results for key TVA systems to determine whether management, operational, and technical controls were adequately tested at least annually and were based on risk; evaluating TVA’s process to correct weaknesses and determining whether remedial action plans complied with federal guidance; and examining contingency plans for key TVA systems to determine whether those plans had been tested or updated. To conduct our work, we reviewed and analyzed relevant documentation and held discussions with key security representatives, system administrators, and management officials to determine whether information system controls were in place, adequately designed, and operating effectively. We also reviewed previous reports issued by the TVA Inspector General’s Office. We conducted this performance audit from March 2007 to April 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individuals named above, Nancy DeFrancesco and Lon Chin, Assistant Directors; Angela Bell; Bruce Cain; Mark Canter; Heather Collins; West Coile; Kirk Daubenspeck; Neil Doherty; Vijay D’Souza; Nancy Glover; Sairah Ijaz; Myong Kim; Stephanie Lee; Lee McCracken; Duc Ngo; Sylvia Shanks; John Spence; and Chris Warweg made key contributions to this report.

Securing the control systems that regulate the nation's critical infrastructures is vital to ensuring our economic security and public health and safety. The Tennessee Valley Authority (TVA), a federal corporation and the nation's largest public power company, generates and distributes power in an area of about 80,000 square miles in the southeastern United States. GAO was asked to determine whether TVA has implemented appropriate information security practices to protect its control systems. To do this, GAO examined the security practices in place at several TVA facilities; analyzed the agency's information security policies, plans, and procedures against federal law and guidance; and interviewed agency officials who are responsible for overseeing TVA's control systems and their security. TVA has not fully implemented appropriate security practices to secure the control systems and networks used to operate its critical infrastructures. Both its corporate network infrastructure and control systems networks and devices were vulnerable to disruption. The corporate network was interconnected with control systems networks GAO reviewed, thereby increasing the risk that security weaknesses on the corporate network could affect those control systems networks.
On TVA's corporate network, certain individual workstations lacked key software patches and had inadequate security settings, and numerous network infrastructure protocols and devices had limited or ineffective security configurations. In addition, the intrusion detection system had significant limitations. On control systems networks, firewalls reviewed were either inadequately configured or had been bypassed, passwords were not effectively implemented, logging of certain activity was limited, configuration management policies for control systems software were inconsistently implemented, and servers and workstations lacked key patches and effective virus protection. In addition, physical security at multiple locations did not sufficiently protect critical control systems. As a result, systems that operate TVA's critical infrastructures are at increased risk of unauthorized modification or disruption by both internal and external threats. An underlying reason for these weaknesses is that TVA had not consistently implemented significant elements of its information security program. Although TVA had developed and implemented program activities related to contingency planning and incident response, it had not consistently implemented key activities related to developing an inventory of systems, assessing risk, developing policies and procedures, developing security plans, testing and monitoring the effectiveness of controls, completing appropriate training, and identifying and tracking remedial actions. For example, the agency lacked a complete inventory of its control systems and had not categorized all of its control systems according to risk, thereby limiting assurance that these systems were adequately protected. Agency officials stated that they plan to complete these risk assessments and related activities but have not established a completion date. Key information security policies and procedures were also in draft or under revision. 
Additionally, the agency's patch management process lacked a way to effectively prioritize vulnerabilities. TVA had only completed one system security plan, and another plan was under development. The agency had also tested the effectiveness of its control systems' security using outdated federal guidance, and many control systems had not been tested for security. In addition, only 25 percent of relevant agency staff had completed required role-based security training in fiscal year 2007. Furthermore, while the agency had developed a process to track remedial actions for information security, this process had not been implemented for the majority of its control systems. Until TVA fully implements these security program activities, it risks a disruption of its operations as a result of a cyber incident, which could impact its customers.
The Energy Policy and Conservation Act (EPCA), enacted in 1975, established CAFE standards with the goal of reducing oil consumption. EPCA required manufacturers to meet a single fleetwide CAFE standard for all cars and either a single standard or class standards for light trucks. The act provided the U.S. Department of Transportation (DOT) with the authority to administer the CAFE program, and DOT delegated that authority to NHTSA. In addition, other federal agencies have played a role in the CAFE program (see table 1). For example, under EPCA, EPA is responsible for the development of CAFE testing and calculation procedures. When it was enacted, EPCA specified that the standard for passenger cars would be 18 miles per gallon (mpg) in 1978, rising to 27.5 mpg by 1985, but it permitted NHTSA to determine the standard for light trucks through rulemakings. As required in EPCA, NHTSA began setting CAFE standards for light trucks at the “maximum feasible level” and made incremental increases to these standards from 1979 through 1996. During that time, the light truck CAFE standard increased from 17.9 mpg to 20.7 mpg. However, from fiscal years 1996 through 2001, NHTSA was barred from using appropriated funds made available in DOT’s appropriation to raise CAFE standards. The CAFE standard for cars remained at the 1985 setting of 27.5 mpg through model year 2010. The first increase in CAFE standards for cars since 1985 will take place for model year 2011 cars. After years of little CAFE-related activity or movement in the two standards, several changes took place. According to NHTSA officials, DOT requested that the appropriations ban be lifted so that they could raise CAFE standards. The ban was lifted beginning in fiscal year 2002, and in 2003, NHTSA promulgated increased CAFE standards for light trucks for model years 2005 to 2007. 
In 2006, NHTSA issued another rule to increase and reform the standards for light trucks, which we refer to as the model year 2008 through 2011 light truck standards. In this light truck rulemaking, NHTSA transitioned from a single CAFE standard applicable to each manufacturer’s fleet to a reformed, attribute-based standard based on a vehicle’s “footprint,” or the size of its wheelbase multiplied by its average track width. The move from a single standard for all light trucks to attribute-based standards for each light truck vehicle model based on a vehicle’s footprint was designed to address a number of downsides to “unreformed” CAFE standards, including potential safety implications and consumer choice limitations. The Energy Independence and Security Act of 2007 (EISA) amended EPCA to require not only light truck but also passenger car standards to be based on an attribute-based curve and for the fuel economy of the entire industrywide fleet—including cars and light trucks—to reach an average of 35 mpg by model year 2020. Subsequent to the enactment of EISA, in 2008, NHTSA proposed CAFE standards based on vehicle footprints for passenger cars and light trucks for model years 2011 through 2015. However, a final rule was issued only for model year 2011 standards in March 2009—a rulemaking effort and CAFE standard that we refer to as the model year 2011 CAFE standard. The goal of this final rule was to reach an estimated fleet average—or target—of 30.2 mpg for cars and 24.1 mpg for light trucks in model year 2011. In recent years, public concerns have grown about the relationship between the emission of GHGs and global climate change. According to the Intergovernmental Panel on Climate Change—a United Nations organization—global atmospheric concentrations of GHGs have increased as a result of human activities, contributing to a warming of the earth’s climate.
If unchecked, this could have serious negative effects, such as rising sea levels and coastal flooding worldwide. Automobiles represent a significant share of GHG emissions. According to EPA, in 2007, personal vehicle use accounted for 17 percent of total GHG emissions in the U.S. In 2007, the United States Supreme Court ruled that EPA has the statutory authority to regulate GHG emissions from new motor vehicles under the Clean Air Act (CAA) because greenhouse gases meet the CAA’s definition of an air pollutant. Furthermore, the Supreme Court held that EPA must regulate GHGs as such if EPA finds them to be an endangerment to public health or welfare. Subsequent to this decision, EPA issued a final Endangerment Finding of GHG emissions in December 2009, laying the foundation for setting GHG emissions standards for vehicles. In addition, in 2005, citing compelling and extraordinary impacts of climate change on the state, California filed a request with EPA for a waiver of CAA preemption to set GHG emissions standards for new motor vehicles starting in the 2009 model year. The CAA directs EPA to grant a waiver unless EPA finds (1) the state’s protectiveness determination was arbitrary and capricious, (2) the state’s standards are not needed to meet “compelling and extraordinary conditions,” or (3) the state’s standards are inconsistent with section 202(a) of the CAA (provisions related to technical feasibility and lead time to manufacturers). Under certain conditions set forth in the CAA, other states may adopt California’s motor vehicle emissions standards. The automobile industry brought litigation in several states, including California, alleging, among other claims, that the state standards were preempted by EPCA (which preempts state standards relating to fuel economy). Federal district courts in Vermont and California ruled against such claims, in the only two cases to be judged on their merits to date. 
California’s waiver request was initially denied by the prior administration. EPA determined that California’s standards were not needed to meet compelling and extraordinary conditions, as required by the CAA, because global climate change and local or regional factors represent different causal links affecting air pollution in California—and previous waivers have addressed only the local or regional air pollution problems. In addition, EPA found that the effects of climate change in California are not compelling and extraordinary when compared to the rest of the country. GAO found in January 2009 that the “compelling and extraordinary” test had never before been used to completely deny a waiver request. The current administration also found that the denial was a historical anomaly, reconsidered the request, and granted the waiver in June 2009 after finding that it should not have been denied under any of the statutory factors. A petition for review of this decision, filed by the U.S. Chamber of Commerce and the National Automobile Dealers Association, is now pending before the U.S. Court of Appeals for the District of Columbia Circuit.
The proposed joint rule would increase CAFE standards to achieve an estimated fleetwide average of 34.1 mpg and implement GHG emissions standards to achieve an estimated fleetwide average of 250 grams per mile (g/mi) of carbon dioxide (CO2) by model year 2016. The agencies jointly issued a Notice of Upcoming Joint Rulemaking in May 2009, issued a Proposed Rulemaking and held three public hearings across the country in September 2009, held a 60-day public comment period that ended in November 2009, and plan to issue the final rules by April 1, 2010. (Fig. 2 shows the changes to CAFE standards over time, including the proposed standards.) In the proposed rule, NHTSA and EPA estimate that the proposed standards will result in both benefits and costs: Potential benefits for consumers and society. The agencies estimate that the new standards will result in approximately 1.8 billion barrels of oil savings and 950 million metric tons of carbon dioxide emissions reductions over the lifetime of vehicles sold in model years 2012 through 2016. In addition, the agencies estimate that new and more fuel-efficient vehicles will save consumers more than $4,000 in gasoline costs over a model year 2016 vehicle’s lifetime. Potential costs for consumers, automobile manufacturers, and others. The agencies estimate that the proposed standards would require manufacturers to incorporate additional fuel-saving technology into vehicles, which would increase the average cost of a model year 2016 vehicle by around $1,100. As a result, this will increase the purchase price of vehicles for consumers, or manufacturers will receive lower profits from vehicle sales, or both. However, the agencies estimate that the total benefits of the proposed standards will outweigh the costs, providing net benefits to society of nearly $200 billion over the lifetimes of the model year 2012 to 2016 vehicles.
In addition, the estimated lifetime fuel savings exceeds the $1,100 increase in vehicle cost for a model year 2016 vehicle, yielding a net savings of about $3,000 for consumers. Although the proposed CAFE and GHG emissions standards are distinct and automobile manufacturers will be subject to both sets, EPA and NHTSA have worked to develop standards that are aligned (what the agencies refer to as “harmonized”) with the intention that manufacturers can build one fleet of vehicles to comply with both sets of standards. This should lower the cost of compliance for manufacturers compared to a case in which the standards were set separately and without regard for the other’s design. This harmonization is possible because fuel economy and GHG emissions have a clear and direct relationship—specifically, vehicle tailpipe carbon dioxide emissions are directly related to the quantity of fuel burned. Given the relationship between GHG emissions and fuel economy, actions to increase fuel economy also necessarily reduce GHG emissions; therefore, manufacturers can use the same technologies to help meet both standards. NHTSA and EPA have proposed standards for both passenger cars and light trucks that are based on vehicle footprint so that each vehicle is subject to a target level based on its footprint, with smaller vehicles having a stricter target (see fig. 3). The footprint-based standard is applied to individual vehicle models based on the size of each vehicle. Because each manufacturer sells a different mix of vehicle sizes, under the proposed standards each manufacturer will have different CAFE and GHG emissions standards. NHTSA first adopted a footprint-based approach—as opposed to a single fleetwide standard—for model year 2008 through 2011 light truck standards. A number of the experts we interviewed supported the current approach of subjecting both passenger car and light truck fleets to footprint-based standards.
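The direct relationship between fuel burned and tailpipe CO2 means each standard has a rough equivalent in the other's units. A minimal sketch of that conversion, assuming EPA's commonly used factor of roughly 8,887 grams of CO2 per gallon of gasoline (the factor is an assumption for illustration, not a figure stated in this report):

```python
GRAMS_CO2_PER_GALLON = 8887  # approximate EPA factor for gasoline; assumed here

def gpm_from_mpg(mpg: float) -> float:
    """Convert a fuel-economy level (mpg) to tailpipe CO2 in grams per mile."""
    return GRAMS_CO2_PER_GALLON / mpg

def mpg_from_gpm(grams_per_mile: float) -> float:
    """Convert a CO2 emissions rate (g/mi) to its fuel-economy equivalent."""
    return GRAMS_CO2_PER_GALLON / grams_per_mile

# Under this factor, EPA's proposed 250 g/mi target corresponds to roughly
# 35.5 mpg, and a 34.1 mpg fleet average corresponds to roughly 261 g/mi.
round(mpg_from_gpm(250), 1)   # ≈ 35.5
round(gpm_from_mpg(34.1))     # ≈ 261
```

Because the two quantities are reciprocals scaled by one constant, any technology that raises mpg lowers g/mi by the same physical mechanism, which is what makes building one fleet to meet both standards feasible.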
In the model year 2008 through 2011 light truck rule, NHTSA cited several potential benefits of a footprint-based approach over a single, fleetwide CAFE standard, including the following: Larger reductions in oil consumption. Oil consumption would be reduced because automakers would be required to improve the fuel economy of vehicles of all sizes rather than only those near the standard. Enhanced safety. Manufacturers would not have an incentive to comply with CAFE standards by pursuing strategies that compromise safety—such as (1) reducing the size of vehicles (applicable fuel-economy targets now become higher as size decreases) or (2) designing models to be classified as light trucks rather than cars, which can increase a vehicle’s propensity to roll over. Under a single standard, manufacturers could reduce vehicle size as one approach for CAFE compliance. More even distribution of the regulatory cost burden. Fuel-economy improvements would be spread across the industry, instead of being concentrated on manufacturers of heavier, lower fuel-economy vehicles. Addressing concerns about consumer choice. Manufacturers now must improve the fuel economy of all light trucks, regardless of size, which addresses criticisms that single, fleetwide CAFE standards were hindering the efforts of some companies to offer a mix of vehicles matching consumer desires. For instance, under the previous system, instead of installing more fuel-saving technologies across their fleets, manufacturers might have moved toward building fewer large vehicles and more small vehicles to meet new CAFE standards, even though consumers typically have not demanded small vehicles. In a footprint-based standard, manufacturers must improve the fuel economy of all light trucks, no matter their size. The CAFE requirement for each manufacturer—which is the basis for determining compliance—will be determined at the end of the model year based on actual production.
For example, manufacturers selling a greater proportion of large vehicles will have a lower average target to meet than will manufacturers focusing on smaller vehicles. Based on estimated sales projections, the proposed targets are estimated to achieve an average of 34.1 mpg across all model year 2016 vehicles sold. While NHTSA and EPA expect benefits from adopting a standard based on vehicle footprint and predict that the administration’s goal of a fleetwide average 34.1 mpg and 250 grams per mile carbon dioxide in 2016 will be met, there is no guarantee that a specific national target will be achieved. This is a tradeoff of adopting a footprint standard compared to the single national CAFE standard NHTSA used in the past. Because the actual fleetwide fuel-economy levels will depend on actual vehicle sales— specifically, the size of cars consumers buy—there is the possibility that the actual fleetwide mpg in 2016 will be higher or lower and realized costs and benefits of the standards will be higher or lower than estimated. For example, even though all of the vehicles in each manufacturer’s fleet may be in compliance with its footprint-based requirement, manufacturers may sell a greater number of large-footprint vehicles than predicted, which would lower each manufacturer’s CAFE requirement. If this is the case, the national fleet may not reach the target of 34.1 mpg by 2016, and the estimated benefits of the standards, which assume achieving a national fleetwide average of 34.1 mpg, would not be fully realized. The opposite, however, could also be the case. If a greater number of smaller vehicles (generally with higher CAFE levels) are sold than expected, manufacturers will have higher CAFE requirements, the national fleet may exceed the target of 34.1 mpg, and estimated benefits assuming a fleetwide average of 34.1 mpg would be exceeded (see fig. 4). Similar scenarios could occur with respect to EPA’s GHG standards. 
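The sales-mix sensitivity described above can be made concrete with a small sketch. A fleet's CAFE level is a production-weighted harmonic mean of its vehicles' fuel economies, so the achieved average depends on how many of each model are actually sold; the models, mpg values, and sales volumes below are hypothetical:

```python
def fleet_average_mpg(fleet: list[tuple[int, float]]) -> float:
    """Production-weighted harmonic mean: total units / sum(units_i / mpg_i)."""
    total_units = sum(units for units, _ in fleet)
    return total_units / sum(units / mpg for units, mpg in fleet)

# Hypothetical manufacturer selling a small car at 38 mpg and a large truck
# at 26 mpg. Each model may meet its own footprint-based target, yet the
# fleetwide average shifts with the sales mix.
car_heavy   = [(70_000, 38.0), (30_000, 26.0)]  # more small cars sold
truck_heavy = [(30_000, 38.0), (70_000, 26.0)]  # more large trucks sold

fleet_average_mpg(car_heavy)    # ≈ 33.4 mpg
fleet_average_mpg(truck_heavy)  # ≈ 28.7 mpg
```

The same compliance curve thus yields different national outcomes depending on what consumers buy, which is why a footprint-based standard cannot guarantee a specific fleetwide mpg target.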
Several key differences between the EPA and NHTSA standards largely arise from the legal authorities under which the standards are set. NHTSA’s authority to administer the CAFE program is derived from EPCA, which, as amended by EISA, requires that NHTSA, for passenger cars and light trucks in each future model year, establish standards at “the maximum feasible average fuel-economy level that it decides manufacturers can achieve in that model year.” EPCA further directs NHTSA to make this determination based on consideration of four statutory factors: technological feasibility, economic practicability, the effect of other standards of the government on fuel economy, and the need of the nation to conserve energy. However, the law does not direct NHTSA on how to balance these four factors—which can conflict—thereby giving NHTSA discretion to define, give weight to, and balance the four factors based on the circumstances in each CAFE rulemaking. Furthermore, how NHTSA balances these four factors can vary from rulemaking to rulemaking. For example, in the model year 2012 through 2016 rulemaking, NHTSA cited economic practicability concerns—given the state of the economy and the financial state of automakers—to set standards at a level lower than it otherwise could have, in accordance with Office of Management and Budget (OMB) guidelines on federal regulatory impact analysis. In addition to the four statutory factors, NHTSA also considers the potential for adverse safety consequences and consumer demand when establishing CAFE standards. EPA’s authority to set GHG standards is derived from the CAA, which authorizes EPA to regulate emissions of air pollutants from all mobile source categories. EPA must prescribe standards for the emission of any air pollutant from motor vehicles that causes or contributes to air pollution that endangers public health or welfare.
In prescribing these statutory standards, EPA considers such issues as technology effectiveness, cost of compliance, the lead time necessary to implement the technology, safety, energy impacts associated with the use of the technology, and other impacts on consumers. EPA has the discretion to consider and weigh these various factors, particularly those related to issues of technical feasibility and lead time. Some differences affect the process each agency must use to set standards, which in turn leads to key differences between the standards. For example, EPCA requires that EPA, in testing fuel economy of passenger vehicles, use 1975 test procedures or procedures that give comparable results, under which air conditioning is not turned on. As a result, manufacturers cannot realize the benefits of air conditioning improvements for complying with CAFE standards, and NHTSA has, to date, not taken air conditioning improvements into account when setting CAFE standards. Under the CAA, however, EPA is not subject to the same limitations, and its proposed GHG standards account for air conditioner improvements. Specifically, the mpg equivalent of EPA’s 2016 target of 250 g/mi of CO2 is higher than NHTSA’s 34.1 mpg CAFE target if achieved solely through fuel-economy improvements, because EPA’s standard also credits air conditioning improvements that CAFE cannot count. This creates potential challenges to harmonization and for manufacturers attempting to manage the design of a fleet. For example, EPA’s proposed GHG standards offer a “temporary lead time” mechanism for manufacturers that sell a limited number of vehicles in the U.S. Although this specific flexibility does not exist in the CAFE standards, under EPCA, NHTSA may exempt qualifying small-volume manufacturers (defined as manufacturers that produce under 10,000 vehicles worldwide annually) from the passenger car standard for a model year. As a result, manufacturers that are able to take advantage of EPA’s temporary lead time mechanism to comply with GHG standards may face challenges in complying with CAFE standards.
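The gap between the two targets can be sketched with the standard conversion factor of roughly 8,887 grams of CO2 emitted per gallon of gasoline burned; the sketch assumes the CO2 target is met entirely through burning less fuel, with no air conditioning credits.

```python
# Convert a tailpipe CO2 standard (grams per mile) into its fuel-economy
# equivalent, assuming all reductions come from burning less gasoline.
GRAMS_CO2_PER_GALLON = 8887  # approximate CO2 emitted per gallon of gasoline

def mpg_equivalent(co2_grams_per_mile):
    return GRAMS_CO2_PER_GALLON / co2_grams_per_mile

print(round(mpg_equivalent(250), 1))  # 35.5
```

The difference between this 35.5 mpg equivalent and the 34.1 mpg CAFE target reflects compliance paths, such as air conditioner improvements, that count toward the GHG standard but not toward CAFE.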
Some experts we met with said that these inconsistencies in flexibility mechanisms between the two sets of standards may present challenges to some manufacturers in meeting the harmonized standards. Mechanisms available for enforcing the standards also differ between the two agencies due to statutory differences. For example, the Clean Air Act prohibits the sale of vehicles without a certificate of conformity from EPA, which indicates that the vehicle meets applicable emission standards. If EPA determines that a vehicle does not meet the emission standards, it may not issue a certificate, thus preventing the manufacturer from legally selling the vehicle. The Clean Air Act also gives EPA authority to recall noncompliant vehicles. NHTSA can take neither of these actions. Because a CAFE standard applies to a manufacturer’s entire fleet for a model year, CAFE fines are assessed for the entire noncomplying fleet. Pursuant to EPCA, fines associated with CAFE noncompliance are currently $5.50 for every tenth of an mpg a manufacturer’s fuel economy is short of the standard, multiplied by the number of vehicles in the manufacturer’s fleet for a given model year. NHTSA recognizes that some manufacturers regularly pay fines instead of complying with CAFE standards; in particular, many European manufacturers pay fines each year. Fines for CAFE noncompliance have not been increased since 1997, and GAO has reported that, as a result, CAFE penalties may not provide a strong enough incentive for manufacturers to comply. NHTSA officials noted that under EPCA, NHTSA has the authority to raise the fines up to $10 per tenth of an mpg. However, raising fines requires an analysis finding that substantial energy conservation would result and that raising fines would not have a substantially deleterious impact on the U.S. economy. GAO has recommended that agencies collecting penalties regularly conduct these types of analyses.
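The EPCA penalty formula described above is simple arithmetic; the shortfall and fleet size in this sketch are illustrative:

```python
# CAFE civil penalty: $5.50 per tenth of an mpg of shortfall, multiplied by
# the number of vehicles in the noncomplying fleet for that model year.

def cafe_fine(standard_mpg, achieved_mpg, fleet_size, rate_per_tenth=5.50):
    shortfall_tenths = max(0, round((standard_mpg - achieved_mpg) * 10))
    return rate_per_tenth * shortfall_tenths * fleet_size

# A 100,000-vehicle fleet falling 0.5 mpg short of its standard:
print(cafe_fine(34.1, 33.6, 100_000))  # 2750000.0
```

At the maximum $10-per-tenth rate NHTSA could adopt, the same shortfall would cost $5 million rather than $2.75 million, the kind of difference the analyses GAO recommends would weigh.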
In contrast to CAFE fines, penalties for violation of a motor vehicle emission standard under the CAA, which may be much higher, are determined on a per-vehicle basis. The CAA gives EPA broad authority to levy fines and require manufacturers to remedy vehicles if the agency determines there are a substantial number of noncomplying vehicles. EPA must consider an assortment of factors, such as the gravity of the violation, the economic impact of the violation, the violator’s history of compliance, and other matters, in determining the appropriate penalty. The CAA does not authorize manufacturers to intentionally pay fines as an alternative to compliance, and EPA does not include in its standard-setting modeling analysis the option for manufacturers to pay fines instead of complying. Manufacturers may be subject to fines as high as $37,500 per vehicle under Section 205 of the CAA. Given that fines for noncompliance with GHG standards may be higher than fines for noncompliance with CAFE, harmonized standards may give manufacturers that have traditionally chosen to pay CAFE penalties an incentive to comply with both sets of standards. In conducting the joint rulemaking, the agencies have collaborated on major tasks. For example, the two agencies coordinated time frames so that key milestones of each rulemaking—such as issuance of the Proposed Rulemaking and time frames for public comment—happened at the same time. This enabled manufacturers to learn about both new standards at the same time and plan appropriately. Officials of both agencies told us that staff from both agencies met on a regular basis, often daily, to coordinate their efforts throughout the rulemaking process.
In addition, according to agency officials, the two agencies formed a number of joint technical teams to examine data used in modeling efforts—for instance, one team examined data on automotive technology that can improve fuel economy and reduce GHG emissions—to ensure that both agencies were using similar data and making similar assumptions to develop standards. As a result of these efforts, each agency had significant input into the development of both sets of standards. EISA mandated NHTSA to consult with both EPA and the Department of Energy (DOE) in prescribing CAFE standards beginning with model year 2011. NHTSA’s use of EPA’s expertise in environmental issues and DOE’s expertise in energy efficiency in informing CAFE standards is important given CAFE’s environmental and energy-security implications. For example, NHTSA has prepared draft and final environmental impact statements, as required by the National Environmental Policy Act, discussing the environmental implications of recent CAFE rulemakings, and EPA has reviewed and provided input on that work. However, EPA’s role in the joint CAFE and GHG emissions rulemaking goes beyond the EISA requirement for consultation. For example, EISA does not require either EPA or DOE to participate in CAFE rulemaking at as high a level as EPA has in the current joint CAFE and GHG emissions rulemaking. This level of EPA involvement in the proposed 2012 through 2016 CAFE and GHG rulemaking is greater than EPA’s involvement in previous CAFE rulemakings, particularly prior to NHTSA’s proposal of CAFE standards for model year 2011. For the model year 2011 proposal, NHTSA and EPA staff jointly assessed which technologies would be available for those model years and their effectiveness and cost. They also jointly assessed key economic and other assumptions affecting the stringency of future standards. 
Finally, they worked together in updating and further improving the model that had been used to help determine the stringency of the model year 2008 through 2011 light truck standards. However, even in the rulemaking for model year 2011, EPA did not devote as many resources or have as much involvement in setting CAFE standards as it did in the model year 2012 through 2016 proposed CAFE and GHG rulemaking. The increased involvement by EPA as an equal partner in the proposed model year 2012 through 2016 CAFE and GHG emissions rulemaking came at the direction of the current administration, when it announced plans to increase CAFE standards and introduce GHG emissions standards for vehicles. EPA officials noted that the involvement of the White House and clear directives to both the Secretary of Transportation and Administrator of EPA for a collaborative approach caused both agencies to commit to the joint process, which officials viewed as successful. To determine the appropriate level of CAFE and GHG emissions standards, NHTSA and EPA each conducted its own regulatory impact analysis using computer models. NHTSA used a model developed by the Volpe National Transportation Systems Center (referred to as the Volpe model), earlier versions of which have been used in previous CAFE rulemakings. The model estimates the costs and benefits to manufacturers, consumers, and society of differing levels of CAFE standards. (See app. II for an in-depth description of NHTSA’s Volpe model.) EPA developed a similar model called the Optimization Model for Reducing Emissions of Greenhouse Gases from Automobiles (OMEGA) to conduct a similar analysis of and inform its proposed GHG standards. While the models are distinct from one another, and NHTSA and EPA each conducted its own modeling, the two agencies collaborated on and coordinated this work. 
In particular, the OMEGA model and Volpe model generally used consistent data inputs and assumptions—for example, the same economic assumptions and, to the extent possible given structural differences between the models, consistent data on vehicle fleets and fuel-saving technologies. According to officials from both agencies, the two agencies worked closely together to develop these data inputs and assumptions. NHTSA’s and EPA’s analyses are also structured similarly and have two components—one that attempts to determine manufacturer response to the standards and another that estimates the effects of the proposed standards on manufacturers, consumers, and society. In addition, although the two models differ in several ways, analyses conducted with each model produced similar results, helping to validate each modeling effort. Some differences involve the treatment of compliance flexibilities or credits—mechanisms created in a standard to reduce the cost of compliance for manufacturers. Other differences involve how the models account for manufacturers conducting multiyear product planning and how technologies were carried over between model years. Both NHTSA and EPA conducted analyses of the respective effects of the proposed CAFE and GHG standards. However, despite differences between the two models, the aggregate results were largely similar. Although NHTSA contributed research to the rulemaking process, it faced challenges in doing so. NHTSA contributed research on fuel efficiency and costs. For example, NHTSA officials said that they conducted new research related to estimating the rebound effect and the costs of oil imports. In 2008, during the development of the model year 2011 rule, NHTSA contracted with an automotive consulting firm to review comments from stakeholders during the public comment period of the rulemaking, which resulted in some technology costs being updated. NHTSA officials said that this work helped improve its analysis.
NHTSA also contributed safety research. However, NHTSA has not recently undertaken new safety research to support the current proposed standards, despite significant and ongoing controversy over vehicle safety and CAFE standards, as well as changes in technology available to reduce vehicle weight. According to NHTSA officials, NHTSA has made such research a priority for the near future in order to support future CAFE rulemaking. In addition, while NHTSA contracted with the National Academy of Sciences (NAS) to provide an updated report on the costs of fuel-saving technologies, and NAS held its first public meeting for this work in September 2007, this work was not completed in time to support analysis for the Notice of Proposed Rulemaking. EISA mandated NHTSA to contract with NAS to receive updates to its earlier report of fuel-saving technology cost and effectiveness in 5-year intervals until 2025. We noted in previous work that both experts and NHTSA officials said it would be ideal to complete and update such work before NHTSA issues a new car or light truck fuel-economy standard. Also, NAS work on technology costs in 2002 was generally viewed by a wide range of experts as being thorough and unbiased. While NAS indicated in a preliminary report that it would finish its work by spring 2008, according to NAS officials, they required more time to acquire technology cost data than initially anticipated. As a result, the final NAS study has not yet been published and was not available to inform analysis for EPA and NHTSA’s September 2009 Notice of Proposed Rulemaking. EPA contributed research in time to provide analysis for the proposed rule. It also contributed funding to a greater degree, especially when compared with past CAFE rulemakings where EPA’s role was limited to consulting. For example, EPA conducted or contracted for three peer-reviewed studies to support the rulemaking and the modeling efforts.
According to EPA officials, these studies included an ongoing $1.1 million study done in conjunction with a consulting firm to determine the direct manufacturing costs of fuel-saving and GHG emissions-reducing technologies—a key input in both agencies’ models; a $40,000 assessment of indirect costs of manufacturing more fuel-efficient vehicles; and a $1 million vehicle simulation modeling study done in conjunction with a consulting firm to refine estimates of emissions reduction and fuel-economy improvements stemming from combinations of technology. These studies provided the analysis of both CAFE and GHG standards with updated information and data. The difference in the extent of new research that NHTSA and EPA conducted for this rulemaking likely results from differences in resources available to the agencies in the recent past. As we mentioned previously, from fiscal years 1996 to 2001—about 6 years—NHTSA was prohibited from using appropriated funds to change CAFE standards. According to NHTSA, the agency lost staff with expertise in this area as a result and did not begin to hire additional automotive engineers until summer 2009. By comparison, EPA has been able to develop and maintain automotive engineering expertise. This expertise has proved helpful in setting GHG emissions standards for automobiles. For example, EPA has been home to the National Vehicle and Fuel Emissions Laboratory since 1971, and in the early 1990s, it expanded its activities to conduct research and development of technologies used to reduce emissions, which are often marketed and licensed to the automobile industry. Although NHTSA brings safety expertise to CAFE standards, which has been a concern with raising CAFE standards in the past, the agency’s primary mission and expertise is in vehicle safety, not vehicle power train design and the impact of vehicle emissions on the environment.
Thus NHTSA cannot be expected to have the same level of in-house expertise related to vehicle power train design and environmental issues as EPA. Although the agencies had to work quickly, the joint proposed model year 2012 through 2016 rulemaking has met all of its milestones to date, and the agencies stated that the collaboration has been successful. This is the first time NHTSA and EPA have conducted a joint rulemaking. The agencies conducted the joint rulemaking under tight time frames and have met all key milestones, such as publishing information about the rule and receiving and responding to public comments. However, the fast pace has left little time or resources to document any effective or efficient processes so they could be used in the future. From the administration’s May 2009 release of the Notice of Upcoming Joint Rulemaking to the expected release of the rule, less than 11 months will have transpired. By comparison, according to NHTSA officials, other recent CAFE rulemakings have taken a minimum of 14 months. The accelerated timeline in the current rulemaking stemmed in part from the statutory requirement that NHTSA issue new CAFE standards 18 months prior to the beginning of the model year that will be affected and from the current administration’s announcement regarding the development of the new standards in May 2009. In order to issue harmonized standards at the same time, both EPA and NHTSA had to adhere to an accelerated timeline. Despite the dual challenge of conducting a joint rulemaking for the first time and on a compressed timeline, some experts we spoke with thought that the two agencies worked well with each other and hoped they would continue to do so. In addition, both agencies found the collaborative partnership to be successful.
The proposed standards cover model years 2012 through 2016, and while it is not clear how fuel economy and GHG emissions will be regulated after 2016, industry stakeholders and others have said that they would like NHTSA and EPA to begin working on the next set of standards in the near future. Officials with the California Air Resources Board said that the state is already considering state GHG emissions standards that would take effect in 2017, and depending on the stringency of federal standards at that time, California may opt to implement its own more stringent standards. Many industry stakeholders we interviewed said that they prefer a national program with harmonized standards over different federal and state standards because multiple standards could substantially increase compliance costs. Some expressed interest in EPA and NHTSA considering CAFE and GHG emissions standards for model years beyond 2016 as soon as possible in order to better ensure harmonized national standards and to give manufacturers appropriate lead time to meet standards. Although we found interest in NHTSA and EPA developing standards for model years beyond 2016, two issues could prevent the agencies from replicating this effort in the future: The processes for coordinating the rulemaking have not been documented by either agency. Documented processes that the two agencies would follow—detailing how each communicated, shared resources, and set plans—would help ensure that best practices are followed and that resources are used efficiently. As GAO has reported, such guidance can aid regulatory programs by improving efficiency and ensuring that benchmarks and time frames are met. In addition, by publishing such documentation, the agencies can increase the transparency of their programs and processes. However, the two agencies have not documented the processes for use during future rulemakings, and officials at both agencies report they currently have no plans to do so.
EPA officials, however, told us that documenting the processes would be a worthwhile task. The two agencies are not legally required to continue coordinating in setting CAFE and GHG emissions standards. As noted, EISA mandated NHTSA to consult with EPA and DOE in setting CAFE standards beginning with model year 2011. However, NHTSA is not required to work with EPA to the extent it has on this joint rule. The collaboration of these two federal agencies came at the direction of the current administration to provide regulatory certainty and ensure that a clear set of rules was established for all automobile manufacturers. In part because NHTSA has previous experience in setting CAFE standards, we were asked to review any improvements NHTSA made to its process for setting CAFE standards. We did so by looking in depth at NHTSA’s regulatory impact analysis using the Volpe model, which has been used in previous rulemakings as well as the current proposed rule. It has been criticized by some experts in previous rulemakings for, among other things, a lack of transparency that limited public review. Because EPA is setting GHG emissions standards for the first time, we did not conduct a similar review of its modeling efforts using the OMEGA model. The first key component of the Volpe model is a simulation of how manufacturers might comply with proposed CAFE standards. The “compliance simulation” of the Volpe model attempts to simulate each manufacturer’s most cost-effective strategy to make its fleet comply with a more stringent CAFE standard by incorporating technologies until the manufacturer achieves compliance, exhausts all available technologies, or pays fines for noncompliance when that becomes more cost-effective than incorporating additional technologies. It relies on several key sources of data, including the “baseline vehicle fleet,” a forecast of the vehicle models manufacturers will produce for sale in the U.S.
in future model years; a list of available fuel-saving technologies, categorized into five groups; estimates of the costs, effectiveness in reducing fuel consumption, applicability, and availability of these technologies; and pathways that estimate available fuel-saving technologies and the order in which manufacturers could take advantage of these technologies to most cost-effectively meet new CAFE standards. This technology simulation is run for each vehicle model in the baseline fleet and produces an estimate of each vehicle’s new fuel economy, weight, and total cost after the manufacturer has modified the vehicle in response to the CAFE standard. The compliance simulation’s output is a forecast of model years 2012 through 2016 vehicles—namely, a re-engineered fleet of vehicles with new prices, fuel types, fuel-economy values, and weights to reflect the changes manufacturers would make to their vehicles to meet the proposed model year 2012 through 2016 CAFE standards. The data for each vehicle in the forecasted model year 2012 through 2016 fleet is then used in the second portion of the analysis. This “calculation of effects” is the second key component of the Volpe model, which uses the compliance simulation data to estimate the costs and benefits of potential changes to the CAFE standard to manufacturers, consumers, and society as a whole. It uses a variety of data inputs, including fuel prices projected for the lifetimes of the vehicles in the fleet, the economic costs of fuel consumption, and damage costs for criteria pollutants. This analysis produces information on the estimated benefits and costs of higher CAFE standards, such as the benefit to consumers of fuel savings from driving more fuel-efficient vehicles, increases in new vehicle prices, changes in the number of vehicle miles traveled, and the societal benefits of reductions in carbon dioxide emissions.
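The compliance-simulation logic described above can be sketched in highly simplified form. Every number here (technology names, per-vehicle costs, mpg gains, the baseline and standard) is an illustrative assumption; the actual Volpe model works per vehicle model with engineering constraints, phase-in limits, and multiyear planning.

```python
# Greedy sketch of the Volpe compliance simulation: apply fuel-saving
# technologies in cost-effectiveness order until the fleet complies,
# technologies run out, or paying the CAFE fine is cheaper than the next
# technology. All figures are illustrative.

FINE_PER_TENTH_PER_VEHICLE = 5.50  # EPCA penalty rate

def simulate_compliance(base_mpg, standard_mpg, fleet_size, technologies):
    """technologies: list of (name, cost_per_vehicle, mpg_gain)."""
    mpg, applied = base_mpg, []
    for name, cost, gain in sorted(technologies, key=lambda t: t[1] / t[2]):
        if mpg >= standard_mpg:
            break
        # Per-vehicle fine avoided by the shortfall this technology closes.
        closed_tenths = (min(mpg + gain, standard_mpg) - mpg) * 10
        if cost > FINE_PER_TENTH_PER_VEHICLE * closed_tenths:
            break  # paying the fine is now the cheaper option
        mpg += gain
        applied.append(name)
    shortfall_tenths = max(0.0, (standard_mpg - mpg) * 10)
    fine = FINE_PER_TENTH_PER_VEHICLE * shortfall_tenths * fleet_size
    return mpg, applied, fine

techs = [
    ("low-rolling-resistance tires", 15, 0.5),  # ($/vehicle, mpg gain)
    ("aerodynamic improvements", 25, 0.5),
    ("turbocharging and downsizing", 600, 2.0),
]
mpg, applied, fine = simulate_compliance(33.0, 34.1, 100_000, techs)
print(applied, round(fine))
```

With these illustrative numbers the manufacturer adopts the two cheap technologies and then pays a fine on the remaining 0.1 mpg shortfall, since turbocharging would cost far more per vehicle than the penalty it avoids, mirroring the fine-paying behavior the model allows for.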
The estimated costs and benefits are used by NHTSA to set CAFE standards at a level that appropriately balances costs and benefits. To increase the transparency of inputs to the Volpe model for the 2012 through 2016 rulemaking, NHTSA used publicly available data to develop the model’s baseline vehicle fleet. In previous rulemakings, NHTSA developed its baseline fleet by using confidential product plans submitted by manufacturers that described the vehicles manufacturers planned to sell in the U.S. in future years. However, manufacturers submitted these plans to NHTSA as confidential business information, and NHTSA could not make these plans available to the public. Comments submitted as part of prior CAFE rulemakings, as well as several experts we spoke to, indicated that the lack of transparency regarding NHTSA’s use of product plans was troublesome because researchers could not replicate NHTSA’s analysis. In developing their respective models for the joint rulemaking, NHTSA and EPA used a baseline fleet that drew primarily from public and commercially available information to make their analyses more transparent and provide additional validation of the results of their analyses. Specifically, NHTSA and EPA relied almost entirely on information sources such as model year 2008 vehicle sales data, EPA’s emission certification and fuel-economy database, and vehicle sales forecasts from several public sources. There are several advantages of using public and commercially available data more extensively than product plans. First, OMB guidance on federal regulatory analysis recommends that analyses be transparent to allow third parties to determine how the model produces its estimates and conclusions. By increasing the transparency of the baseline vehicle fleet, NHTSA allowed outside experts the opportunity to review the model’s inputs and outputs and replicate the results of the model to better ensure that its analysis is thorough and sound.
Second, because the submission of product plans is strictly voluntary, NHTSA has not consistently received complete information from all manufacturers with U.S. sales, which has inhibited its ability to forecast the future vehicle fleet across manufacturers using that data. Although several companies submit nearly complete product plans, others submit only partial plans, while still others do not submit any information. NHTSA also indicated it could save staff time by not having to correct errors in the manufacturers’ submissions that NHTSA does receive. Third, by using actual fuel-economy test data from model year 2008 vehicles, NHTSA would be able to use this verified fuel-economy information, rather than the estimates of the fuel-economy performance from vehicles’ manufacturers. Despite these advantages, there are some disadvantages to using the publicly available model year 2008 data to establish the baseline vehicle fleet. For example, by forecasting the model year 2012 through 2016 vehicle fleet using model year 2008 vehicle data, NHTSA and EPA’s baseline includes vehicles that have been eliminated or for which production has been reduced, such as the Chrysler PT Cruiser and Hummer H2. It also does not include several vehicle models and technologies that manufacturers have recently introduced or plan to introduce, such as Ford’s EcoBoost system (a package of engine technologies that in combination significantly improve fuel economy), the Honda Insight (a conventional hybrid), Chevrolet Volt (a plug-in hybrid electric vehicle), or Nissan’s all-electric LEAF. In addition to specific vehicles, NHTSA’s baseline vehicle fleet forecast does not account for broad-scale changes to vehicle lines that manufacturers have started, such as Chrysler’s plans to use Fiat power trains to offer small and medium-sized cars.
Finally, NHTSA has found it difficult to determine, from either public or commercial sources, certain specific data inputs used in the baseline, such as information on electric power steering and reduced rolling-resistance tires. Consequently, NHTSA has had to use a small amount of data from product plans submitted in spring 2009 to fill these data gaps. NHTSA is also consulting with manufacturers regarding the possible release of model year 2010 or model year 2011 product plans that NHTSA could use in its development and analysis of the final model year 2012 through 2016 standards. Despite these disadvantages, NHTSA, EPA, and several experts we spoke to believe that the new transparency of its analysis outweighs the limitations of using public and commercially available data to establish its baseline. To estimate the cost and effectiveness of fuel-saving technologies, the agencies also drew on the technical literature on emission control and fuel economy, such as papers published by the Society of Automotive Engineers and the American Society of Mechanical Engineers. In addition, confidential data submitted by vehicle manufacturers in response to NHTSA’s request for product plans, and confidential information shared by automotive industry component suppliers in meetings with EPA and NHTSA staff held during the second half of the 2007 calendar year, were used as a cross-check of the public data mentioned above but not as a significant basis for the proposed model year 2012 through 2016 rule. NHTSA also updated its cost estimates for several key technologies. For example, NHTSA revised the cost of turbocharging and downsizing an engine—a cost range of $512 to $1,098, depending on engine type, compared to the range of $822 to $1,129 used for the model year 2011 CAFE standards—using data available from EPA’s ongoing teardown study with FEV, an automotive research, design, and development company. It also revised the costs of several other key technologies, such as cylinder deactivation—a cost range of $28 to $190, compared to the range of $306 to $400 used for the model year 2011 CAFE standards.
However, despite this concerted effort, NHTSA and EPA were not able to make further refinements because the anticipated NAS study of vehicle technology was not completed on schedule. Indirect costs to manufacturers. NHTSA adopted research that EPA had contracted for to refine estimates of the indirect costs to manufacturers of manufacturing more fuel-efficient vehicles. These costs include research and development and marketing costs associated with the introduction of a new technology and give decision makers a more comprehensive view of the total costs a manufacturer would incur for implementing new technology than direct costs alone can provide. EPA supplemented an initial contractor report on this subject with an additional in-house study, which involved significant staff resources. The social cost of carbon dioxide emissions. NHTSA adopted an estimate of the damage resulting from carbon dioxide emissions that is more in line with recent scientific and economic research, leading to a better reflection of the estimated benefits of increased CAFE standards related to reductions in GHG emissions. In the model year 2008 through 2011 light truck rule, NHTSA declined to include an economic value for reducing GHG emissions, citing the wide variation in published estimates of GHG emissions costs. However, a November 2007 federal court decision found that NHTSA’s decision to not provide a monetized estimate of the benefit of reducing GHG emissions was arbitrary and capricious. For the proposed model year 2012 through 2016 standards, NHTSA is using estimates of $5, $10, $20, $34, and $56 per metric ton of carbon dioxide—with an emphasis on the $20 value. These values, also adopted by EPA in its analysis, reflect the current administration’s interim set of estimates of the social cost of carbon for agencies to use in regulatory analyses until a federal interagency working group develops a more comprehensive estimate for use in future economic and regulatory analyses.
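The spread among the social-cost-of-carbon values matters because estimated benefits scale linearly with the chosen value; the emissions-reduction figure in this sketch is made up purely for illustration.

```python
# Monetize a hypothetical CO2 reduction at each interim social-cost-of-carbon
# value used in the analysis (dollars per metric ton, with emphasis on $20).
SCC_VALUES = [5, 10, 20, 34, 56]

def monetized_benefit(tons_co2_avoided, scc_per_ton):
    return tons_co2_avoided * scc_per_ton

avoided = 50_000_000  # hypothetical lifetime metric tons avoided
for scc in SCC_VALUES:
    print(f"${scc}/ton -> ${monetized_benefit(avoided, scc):,}")
```

Between the $5 and $56 values the same physical reduction yields benefit estimates more than eleven times apart, which is why the choice of emphasis value is consequential for the rule's net-benefit calculations.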
Projected fuel prices. NHTSA used the most recent projections of fuel prices provided by the Energy Information Administration (EIA) to place a value on the fuel savings expected under different CAFE standards. The monetized benefits of the new CAFE standards are more sensitive to changes in fuel prices than to changes in other variables, meaning that the estimated benefits of more stringent standards will increase or decrease to a greater extent in response to changes in the price of fuel. For the current proposal, NHTSA is using a range of prices from $2.50 in 2011 to $3.82 in 2030, which is consistent with the EIA’s 2009 main fuel price projections, and is focusing on an average retail gas price of $3.77 per gallon in 2007 dollars. In addition, NHTSA is reviewing the EIA’s high and low fuel price projections to determine a range of potential costs and benefits, a best practice recommended by OMB guidance. In projecting fuel prices, EIA considers recent and likely future developments in the world oil market, the effect of the current geopolitical situation on oil supply and prices, and conditions in the domestic fuel supply industry that affect pump prices. However, EIA projections have at times underestimated gas prices, most recently during the 2008 price spike. Several experts we spoke to noted that gas prices are extremely difficult to predict. However, most of the experts we spoke to also indicated that, despite its limitations, EIA is the most credible source for projected fuel prices. Although EIA officials told us they do not issue guidance to agencies on how to use EIA projections in regulatory impact analyses, they expect agencies to consider that events EIA cannot predict will affect energy demand and fuel prices.
By applying the best research available, NHTSA should obtain better estimates of the benefits and costs of higher CAFE standards and allow standards to be set at a level better reflecting those benefits and costs. In line with OMB guidance on federal regulatory analysis, NHTSA conducted more thorough analyses in the proposed model year 2012 through 2016 standards than in previous CAFE rulemakings, including the model year 2008 through 2011 light truck rule. First, NHTSA tested and compared the benefits and costs of a greater number of CAFE levels set at different stringencies (also known as alternative scenarios) than it has in the past. By doing so, NHTSA gives decision makers a better picture of which level of CAFE standards provides the best balance between costs and benefits. NHTSA doubled the number of alternative CAFE scenarios it has tested from four to eight since the model year 2008 through 2011 light truck final rule. Specifically, NHTSA considered scenarios in which fuel- economy levels are increased at an annual average rate ranging from 3 to 7 percent, as well as scenarios in which the benefits are modified—for example, selecting a level at which the total costs of new CAFE standards are equal to their total benefits or a level that maximizes the net benefits of new CAFE standards to society. As a result, NHTSA was able to provide more comprehensive information for decision makers and increase public understanding of NHTSA’s process for setting standards. However, NHTSA also considered factors external to the model in determining the level of the proposed model year 2012 through 2016 standards. Although OMB guidance on regulatory analysis specifies that agencies should select the scenario that maximizes the net benefits of the regulatory action to society, NHTSA did not propose to select the “maximum net benefits” scenario as its preferred alternative for the standards in the proposed rule. 
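The scenario comparison described above can be sketched as a simple decision procedure: compute net benefits for each alternative and apply a selection rule. All scenario names and dollar figures below are invented for illustration; they are not NHTSA's estimates.

```python
# Illustrative comparison of alternative CAFE scenarios by net benefits.
# Figures (billions of dollars) are hypothetical, not NHTSA's analysis.

scenarios = {
    "3% annual increase": {"benefits": 120.0, "costs": 40.0},
    "4.3% annual increase (proposed)": {"benefits": 170.0, "costs": 70.0},
    "6% annual increase": {"benefits": 210.0, "costs": 120.0},
    "7% annual increase": {"benefits": 225.0, "costs": 160.0},
}

def net_benefits(s):
    return s["benefits"] - s["costs"]

# The OMB-preferred rule: pick the scenario maximizing net benefits.
max_net = max(scenarios, key=lambda name: net_benefits(scenarios[name]))

# An alternative rule NHTSA also examined: total costs closest to
# total benefits (a break-even level).
break_even = min(scenarios, key=lambda name: abs(net_benefits(scenarios[name])))

print("Maximum net benefits:", max_net)
print("Closest to break-even:", break_even)
```

As the sketch suggests, the two decision rules can select different stringency levels from the same cost and benefit estimates, which is one reason NHTSA's departure from the maximum-net-benefits rule drew attention.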
Instead, NHTSA proposed to select a scenario in which CAFE standards increase at an average rate of 4.3 percent per year. According to NHTSA officials, that decision was justified because the four statutory factors that they must weigh when setting CAFE standards outweigh OMB guidance. Several experts we spoke to said that NHTSA’s decision was justified because selecting the “maximum net benefits” scenario would have resulted in CAFE standards that automobile manufacturers could not realistically meet without making significant tradeoffs. For instance, one expert thought manufacturers would have to change their fleet mix to build and sell smaller vehicles and would have to pass on substantial costs to consumers, which could reduce vehicle sales. In addition, another expert thought that if lead time is not sufficient, manufacturers will not be able to hire staff quickly enough to handle the additional work. Additionally, as provided for in OMB guidance, NHTSA expanded its use of two types of uncertainty analysis relative to previous rulemakings. Specifically, NHTSA expanded its sensitivity testing and probabilistic uncertainty analyses, both of which assess the uncertainty associated with key assumptions and inputs in its analysis. NHTSA’s sensitivity analysis and probabilistic uncertainty analysis test whether variability in the values of key model inputs would dramatically affect the costs and benefits of a potential CAFE level. The variability of key inputs may arise from differing estimates in credible studies or simply reflect limited current knowledge. These sensitivity and uncertainty analyses give decision makers a sense of which potential CAFE level, despite the variability of key inputs, will best balance benefits and costs.
In comparison to the model year 2008 through 2011 light truck rule, NHTSA’s current sensitivity and probabilistic uncertainty analyses considered more case scenarios focusing on a number of critical inputs, including projections of fuel prices, the rebound effect, the value of reducing carbon dioxide emissions, and the military security benefits of reducing fuel consumption; variability in any one of these inputs, or in a combination of them, may affect the results of the overall analysis. As part of this work, we spoke with a number of experts familiar with the Volpe model about their assessment of the data used in the model. Although they offered criticisms, they did not agree on what needed to be improved (see app. I for information on the experts with whom we consulted). In general, nearly all of the experts we spoke to offered some critique of the model and its data. For instance, some, but not all, experts said that NHTSA was too cautious in updating the values for variables such as the social cost of carbon dioxide emissions, given the state of current research. These experts said that NHTSA was underestimating the social cost of carbon dioxide emissions, which would lead to an underestimation of the benefits of CAFE standards and the establishment of standards set at a lower than ideal level. However, we could not find general consensus among the experts we spoke to that NHTSA should have modified values for specific variables or made other improvements to the model. For example, NHTSA used a lower value for the rebound effect (10 percent) to more closely align with values identified in recent research. Several experts thought that NHTSA should have adopted the value (5 percent) identified in the research, which was even lower than what NHTSA used, while others thought that NHTSA’s more cautious approach was appropriate until additional studies using different data sets verified the findings.
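A probabilistic uncertainty analysis of the kind described above can be sketched by drawing uncertain inputs from plausible ranges and observing the resulting spread in net benefits. The benefit function below is a toy model and all quantities (gallons saved, technology cost) are invented; only the fuel-price range ($2.50 to $3.82) and the rebound-effect range (5 to 10 percent) come from the discussion above.

```python
# Minimal Monte Carlo sketch of probabilistic uncertainty analysis:
# vary the fuel price and rebound effect, observe spread in net benefits.
# The net-benefit model and all fixed quantities are hypothetical.

import random

def net_benefit(fuel_price, rebound):
    """Toy model: fuel savings valued at the fuel price, discounted by
    the share of savings lost to additional driving (rebound), minus a
    fixed technology cost. Quantities are illustrative only."""
    gallons_saved = 50.0e9   # assumed lifetime gallons saved
    tech_cost = 90.0e9       # assumed total technology cost
    return gallons_saved * fuel_price * (1.0 - rebound) - tech_cost

random.seed(0)
results = []
for _ in range(10_000):
    price = random.uniform(2.50, 3.82)    # EIA 2009 projection range
    rebound = random.uniform(0.05, 0.10)  # range debated by experts
    results.append(net_benefit(price, rebound))

results.sort()
print(f"5th percentile:  ${results[500] / 1e9:,.1f}B")
print(f"median:          ${results[5000] / 1e9:,.1f}B")
print(f"95th percentile: ${results[9500] / 1e9:,.1f}B")
```

The resulting percentile band is what gives decision makers a sense of whether a candidate CAFE level remains beneficial across the plausible range of input values, rather than only at the point estimates.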
We did find considerable controversy among experts over the potential safety impact of weight reduction in vehicles—much more so than for other variables assessed in the Volpe model. While some experts stated that manufacturers could safely reduce vehicle weight while maintaining the size of the vehicle by substituting lightweight but durable materials for heavier materials (material substitution), other experts maintained that any effort to reduce vehicle weight would adversely affect safety. Two studies, one developed by NHTSA (Kahane study) and a second conducted by an automotive engineering consulting firm (Dynamic Research, Inc., study), came to different conclusions on this issue, and to date, no subsequent study has been conducted in a manner designed to resolve the conflict. DOE has sponsored research through the Lawrence Berkeley National Laboratory that examines the relationship between vehicle weight and driver casualty risk using police-reported crash data and CAFE compliance records, but given the high level of ongoing controversy, this approach may not satisfy all the experts invested in this issue. In addition, neither the Kahane study nor the Dynamic Research, Inc., study was able to assess directly how material substitution as a particular approach to weight reduction could affect safety, because the vehicles analyzed in the two studies were limited to model years 1985 through 1999. During this period, CAFE standards were not attribute-based, and manufacturers had a greater incentive to improve fuel economy by reducing vehicle size rather than by reducing vehicle weight through material substitution. In addition, several experts noted that by using the Kahane study in its current work, NHTSA may be overestimating the safety implications of higher CAFE standards because the study does not consider technology solutions like material substitution as an option that could improve fuel economy without negatively affecting safety.
Because NHTSA accounts for the safety effects of proposed standards by estimating their safety implications, relying on this research in the future could result in standards being set at a lower level. In the past, concerns about safety have prevented non-attribute-based CAFE standards from being increased. We also learned from experts that vehicle safety is challenging to address because the safety tradeoff between larger, heavier vehicles and smaller, lighter vehicles does not lend itself to a clear policy solution. Generally, larger and heavier vehicles, which enhance the safety of their passengers as a result of their size and weight, pose a greater safety threat to other vehicles on the roadways than smaller, lighter cars do. Conversely, although smaller, lighter cars pose less of a threat to other vehicles on the road, they cannot provide the same degree of safety to their passengers that larger, heavier vehicles do. The degree of difference in the size and weight of vehicles has some bearing on passenger safety: larger, heavier vehicles provide their passengers safety benefits and impose on others safety costs, while smaller, lighter vehicles provide others safety benefits and impose on their passengers safety costs. Several experts with whom we spoke thought that additional research was needed to better understand the relationship between vehicle size, weight, and safety, as well as to identify how best to reduce the weight of vehicles in a manner that creates the least risk. Experts recommended several different methodological approaches to assess this relationship, including future studies that examined material substitution in accident outcomes once vehicles with this technology became more prevalent in the fleet. Others recommended the use of computer crash simulation modeling to identify best practices in the use of material substitution. 
Federal agencies can use retrospective analyses of rulemakings to help determine the extent to which the expected costs, benefits, and goals of a regulation are being realized. A retrospective analysis of CAFE standards could help NHTSA and Congress determine the extent to which goals of the standards—such as improvements in fuel economy—are being met and provide insight into ways to improve the standards. In addition, a retrospective analysis of key data inputs could help determine if there are systematic issues with the estimation of those data and identify means to improve the data in the future. EPA officials noted that they have used retrospective analyses of other regulatory programs to assess the accuracy of program costs. For example, in 2002, EPA issued a retrospective cost analysis of a large number of light-duty vehicle criteria pollutant standards and mobile source fuel standards implemented between 1992 and 2001. However, because EPA has not previously issued GHG emissions standards for automobiles, it would not be able to conduct these types of analyses for GHG emissions standards at this time. With respect to the model year 2008 through 2011 light truck CAFE standards, the following retrospective analyses could be conducted by NHTSA:

- An overall analysis of the standards to determine the extent to which the new, footprint-based standards met intended goals (e.g., increases in fuel economy and reductions in fuel consumption). As the proposed model year 2012 through 2016 CAFE standards are also to be based on vehicle footprint, this analysis could help determine if the move to the footprint-based standard provided the intended benefits or imposed unexpected costs.

- An analysis of the accuracy of key data inputs, including the baseline fleet and technology cost estimates.
NHTSA has been criticized in the past for not adequately estimating these two sets of data, which provide crucial information for determining the effects of the proposed standards, and thus need to be as accurate as possible. Although NHTSA officials we spoke with recognize the value of these analyses and hope to conduct them, they report that resource limitations have prevented them from doing so in the past and will prevent them from doing so in the near future. In addition, NHTSA is not required to do any of these analyses. A discussion of NHTSA officials’ responses regarding retrospective analyses and the resource limitations that have prevented them from being conducted follows: Model year 2008 through 2011 light truck standards. NHTSA staff said that such retrospective analysis of the model year 2008 through 2011 light truck standards would be worthwhile and informative. However, according to NHTSA officials, in recent months the agency has devoted all of its dedicated CAFE staff’s time to the proposed model year 2012 through 2016 CAFE rule and, as a result, has not been able to devote resources to conducting a retrospective analysis. In addition, given that NHTSA staff said that the agency is being asked by a majority of commenters addressing the subject to begin working on CAFE standards beyond model year 2016 as soon as possible, they may not be able to work on a retrospective analysis once the model year 2012 through 2016 standards are finalized and released. However, a number of experts we interviewed said that NHTSA should conduct such an analysis in order to provide insight into the standards and their actual effects. Manufacturers’ sales data. 
While NHTSA told us that it would like to look back at manufacturers’ actual sales as a means to assess the accuracy of the product plans that manufacturers submitted and that NHTSA used as the baseline fleet in setting model year 2008 through 2011 light truck standards, it said that it has no definitive plans for conducting this analysis in the near future. NHTSA officials cited a lack of resources in the agency for not conducting such an analysis. In addition, because 2008 sales were an anomaly—they were unusually low given the economic downturn—officials thought a study of the extent to which actual 2008 sales were in line with the forecasted sales for 2008 that were used to set those standards would be of little value. However, an analysis of actual future years’ sales against the estimated sales of the baseline fleet used in the rulemaking would be of value, as it would help validate data and potentially identify means to improve fleet forecasts in future CAFE rulemakings. Cost estimates of technology. NHTSA officials also told us that an assessment of the cost estimates of technology used in previous analyses would be valuable. However, NHTSA staff also said that such an analysis would be challenging, as it is hard to get accurate data on the actual cost of technology components. This is because these components are either sold directly to, or produced by, automobile manufacturers, meaning that there is no clear, public historical data on their sales price. However, while some experts with whom we spoke recognized the challenges in conducting such an analysis, they thought that such an assessment would provide value and recommended several different approaches for conducting this type of analysis.
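The kind of retrospective forecast-accuracy check described above can be sketched as a simple comparison of forecasted against actual sales by vehicle model. All model names and sales figures below are invented for illustration.

```python
# Hypothetical sketch of a retrospective check: compare the baseline
# (forecasted) fleet sales used in a rule against actual sales.
# Model names and figures are invented.

forecast = {"Model A": 250_000, "Model B": 120_000, "Model C": 80_000}
actual   = {"Model A": 210_000, "Model B": 135_000, "Model C": 60_000}

def pct_error(fcst, act):
    """Signed forecast error as a percentage of actual sales."""
    return 100.0 * (fcst - act) / act

for model in forecast:
    err = pct_error(forecast[model], actual[model])
    print(f"{model}: forecast {forecast[model]:,}, actual "
          f"{actual[model]:,}, error {err:+.1f}%")

total_err = pct_error(sum(forecast.values()), sum(actual.values()))
print(f"Fleet total error: {total_err:+.1f}%")
```

A systematic bias in the per-model or fleet-level errors, rather than random scatter, would be the signal that the baseline-forecast process itself needs adjustment in future rulemakings.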
For example, some experts suggested that costs could be validated through a vehicle teardown program, such as the type of project EPA initiated last year, or through an analysis of sales data and technology that manufacturers incorporated into recent models to comply with increased standards. While these studies could potentially impose large resource demands, they would also potentially help improve the technology cost assumptions in future CAFE rules, helping to create standards that more accurately reflect costs and benefits. Because CAFE and GHG emissions standards are closely related and automobile manufacturers will be subject to both, close collaboration between NHTSA and EPA can minimize compliance costs to the industry and ensure harmonized standards. Furthermore, regardless of how the government may set any future standards—jointly or independently—a continued partnership between the two agencies can help assure fiscal responsibility by leveraging—rather than duplicating—federal efforts and resources, including expertise and human capital costs. However, the current level of collaboration between NHTSA and EPA, which stems from the joint rulemaking process the agencies undertook at the discretion of the current administration, is not set in law or otherwise required. If NHTSA and EPA do not collaborate closely on future standards, there is a risk that the standards may not be harmonized, which would lead to increased compliance costs for manufacturers; the standards may not reflect the expertise of both agencies, such as the vehicle power train technology and environmental expertise of EPA and the vehicle safety expertise of NHTSA; and the goals that the standards are attempting to accomplish may not be met. Also, the standards may not accurately reflect the best estimates of key costs and benefits, thus imposing added costs on the economy or failing to provide benefits to society as large as the standards could.
In addition, this is the first joint rulemaking conducted between these agencies, and NHTSA and EPA are under tight time frames to set the standards. However, the agencies are not documenting the processes being used. If NHTSA and EPA must collaborate on future standards, staff may spend additional time recreating these processes—ones which appear to be working effectively—and relearning how best to interface with one another’s leadership structure, management processes, and research activities. As a result, the two agencies may not share their respective expertise and resources as well, potentially leading to inefficiencies, less thorough and rigorous regulatory analyses, and standards that may not be effectively harmonized or developed with similar time frames. NHTSA has not yet conducted—nor does it have plans to conduct—a full and formal analysis of the effectiveness and outcomes of its adoption of the footprint-based CAFE standards for light trucks. Also, it has no plans to assess the accuracy of key data inputs used to set these standards, even though it is now proposing a footprint-based approach for passenger vehicles as well. Conducting these types of analyses can help policymakers determine whether anticipated benefits and costs have been realized and identify corrections in or improvements to existing programs. NHTSA is not required to conduct such analyses and has limited staff and resources to devote to this effort. As a result, it is not clear if the new standards have met goals that NHTSA intended—such as fuel savings and improved safety outcomes—and if the move to the footprint-based standards was worthwhile. Furthermore, NHTSA does not know how well it estimated key data inputs that help determine the level at which standards are set, including technology costs; whether manufacturers used the types of technologies NHTSA expected in order to comply with new standards; and whether baseline fleets matched the vehicle mix actually sold. 
Consequently, agency officials cannot learn from the past and make adjustments to the process, such as seeking different data sources, to ensure that future standards are based on the most accurate data available. Given the importance of safety in setting CAFE standards, ensuring that decision makers and the public have the most accurate information on the relationship between vehicle size, weight, and safety will be important if the standards are to be changed in the future. In addition, the data inputs that NHTSA and EPA use to help set and analyze the effects of the proposed model year 2012 through 2016 standards should be based upon the best available research and reflect a consensus among experts and stakeholders. Given the controversy among experts and the increasing availability of material substitution—an advancement in technology to reduce weight that could compensate for safety effects—new research could help to answer questions regarding the extent to which weight can be reduced without affecting safety and whether there are best practices for employing material substitution. Finally, while other sources of technology costs were used in developing CAFE and GHG emissions standards, the 2002 NAS work on technology costs was generally viewed by a wide range of stakeholders and experts as being thorough and unbiased. Congress authorized NHTSA to contract with NAS at 5-year intervals until 2025 so that the agency would have current information available to set future standards. However, if NHTSA cannot ensure that this work is available in time to support analysis in future rulemaking, this study, and the federal money that sponsored it, will be wasted. Based on our review, we are making five recommendations. We recommend the following to NHTSA and EPA: NHTSA and EPA should document the process used in this joint rulemaking to establish a roadmap for any future rulemaking efforts and facilitate future collaboration. 
In addition, NHTSA and EPA should publish this documentation in order to increase transparency. To ensure continued collaboration and an enhanced relationship in any future CAFE and GHG emissions rulemakings, NHTSA and EPA should enter into a Memorandum of Understanding in which the agencies agree to continue their enhanced partnership. Given the lack of consensus among experts and stakeholders regarding the relationship between vehicle size, weight, and safety, and the emergence of new strong-but-lightweight materials, NHTSA and EPA, with input from key stakeholders, should conduct or sponsor new research on safety and its relationship to vehicle size and weight. In addition, we are recommending the following to NHTSA: NHTSA should conduct and document a retrospective analysis of the model year 2008 through 2011 light truck standards, given the potential impact of CAFE standards on the automobile industry and consumers. In addition, we recommend that NHTSA identify opportunities to evaluate the accuracy of key estimates, such as technology costs, used to determine the model year 2008 through 2011 light truck standards. As EPA has experience conducting retrospective analyses of regulatory programs, NHTSA should consider involving EPA in this process. NHTSA should set delivery time frames for future NAS studies to ensure the availability of these studies in a time frame useful for incorporation in NHTSA’s regulatory analyses. We provided a draft copy of this report to the Department of Transportation and the Environmental Protection Agency for their review. We also provided a relevant section of the report to the Energy Information Administration, and officials confirmed that information characterizing EIA’s fuel price projections was accurate. EPA provided a written response, which is reproduced in appendix III.
In its response, EPA agreed with our characterization of NHTSA and EPA’s collaboration on setting CAFE and GHG emissions standards and with our recommendations. In addition, EPA provided technical comments via e-mail, which we incorporated as appropriate. DOT provided its response by e-mail and generally agreed with the report’s recommendations. NHTSA also provided technical comments, and while we incorporated a number of these comments, others offer an opportunity for additional discussion. First, NHTSA suggested that our first two recommendations—(1) that NHTSA and EPA document the process used in this joint rulemaking, and (2) that NHTSA and EPA sign a Memorandum of Understanding to continue this enhanced partnership—apply only if future rulemakings are conducted jointly. We did not make this change. Given NHTSA and EPA’s successful collaboration on CAFE and GHG emissions standards, we believe continued collaboration will help ensure that federal resources and expertise are leveraged efficiently and effectively—regardless of whether future administrations continue to issue both sets of standards jointly or separately, or pursue only CAFE or GHG emissions standards. Second, in our discussion of the impact of the appropriations ban from fiscal years 1996 through 2001 that prevented NHTSA from conducting work on CAFE issues, we noted that NHTSA lost staff with relevant expertise and did not begin to hire additional automotive engineers until summer 2009. We looked into this issue because in our 2007 report, NHTSA officials told us they needed additional staff with expertise in automotive engineering and computer modeling to assist in developing technology cost and effectiveness estimates, as well as other tasks, to prepare for future changes in CAFE standards.
NHTSA commented in response to this draft that the prohibition did not prevent DOT from sustaining relevant engineering, energy, and environmental expertise, and that after 2001, NHTSA leveraged DOT’s expertise. NHTSA also commented that in our current review, we did not examine broader staff capabilities within DOT. We agree that this information is important. However, we were not able to confirm the extent to which NHTSA leveraged DOT’s expertise because NHTSA did not provide this information. We continue to believe that NHTSA and EPA have different expertise and resources—ones that likely cannot be replicated efficiently at both agencies but that are crucial for the development of balanced, effective standards for cars and light trucks, and therefore we did not revise the report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Transportation, the Administrator of the Environmental Protection Agency, the Administrator of the Energy Information Administration, and interested congressional committees. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions are listed in appendix IV. To describe the proposed corporate average fuel economy (CAFE) and greenhouse gas (GHG) emissions standards, we analyzed documentation related to the rulemaking, such as the May 2009 Notice of Upcoming Joint Rulemaking, September 2009 Notice of Proposed Rulemaking, and associated preliminary regulatory impact analyses from both agencies. 
We analyzed these documents to summarize the structure of each set of standards, describing how the National Highway Traffic Safety Administration (NHTSA) and the Environmental Protection Agency (EPA) harmonized the standards and areas in which there are differences between the standards, such as certain types of flexibilities like temporary lead-time mechanisms. We also summarized related legislation that establishes CAFE fines and summarized EPA’s authority under the Clean Air Act to assess fines for noncompliance with GHG standards, to describe the penalties that NHTSA and EPA will apply for noncompliance with the new standards. To describe NHTSA’s and EPA’s processes for setting proposed model year 2012 through 2016 CAFE and GHG emissions standards, we reviewed and analyzed relevant rulemaking documents, such as the Notice of Proposed Rulemaking and the legislation establishing CAFE standards and EPA’s authority to regulate GHG emissions, noting the types of analyses each agency was allowed to conduct under its individual legal authority. We analyzed documentation related to the analyses the agencies conducted. We also interviewed agency officials and reviewed documentation from NHTSA and EPA related to the work they conducted in setting the standards. To describe how the agencies collaborated with one another to issue the standards, we analyzed these interviews and documentation against GAO criteria for evaluating communication and coordination among federal agencies. Through interviews with officials and by reviewing research each agency developed as part of the rulemaking, we identified the expertise and resources each agency brought to bear in the development of the standards. 
To evaluate the improvements made to NHTSA’s regulatory impact analyses used in setting CAFE standards, we reviewed relevant documentation, including NHTSA’s Preliminary Regulatory Impact Analysis on model year 2011 CAFE standards for passenger cars and light trucks and for the proposed model year 2012 through 2016 standards. We also conducted literature searches for research on fuel economy published since 2007—the year of our last report on CAFE standards. We interviewed NHTSA officials and staff at the Volpe National Transportation Systems Center, as well as automobile industry stakeholders—including domestic and international automobile manufacturers; an association representing original equipment suppliers; vehicle technology specialists at national laboratories and academic research centers; and independent experts on vehicle technology, transportation, and modeling. We identified these experts through several approaches: About half of the experts we contacted had assisted us in our 2007 review of CAFE standards. Several of these experts were members of the current or 2002 National Academy of Sciences (NAS) committee, while others had been recommended by members of the NAS committee or NHTSA. We conducted internet searches to identify experts publishing recent research on fuel economy, GHG emissions, economic modeling, and other issues. We asked experts participating in our work for recommendations. We also pursued a more in-depth analysis from stakeholders about safety and vehicle weight by reviewing the methodology of several key studies and interviewing engineers and other organizations with specific expertise in safety and vehicle design, such as the Insurance Institute for Highway Safety and experts from National Laboratories. We also interviewed officials from the Energy Information Administration (EIA) to review gasoline price projections that are used in the Volpe model. 
To evaluate NHTSA’s processes for obtaining and validating data on automobile manufacturer product plans and cost data on fuel-saving technologies, we analyzed NHTSA documentation against GAO criteria for developing, managing, and evaluating cost estimates and for assessing data reliability. To evaluate NHTSA’s processes for estimating the costs and benefits of improved vehicle fuel economy in the Volpe model, we analyzed NHTSA documentation against federal guidance for conducting regulatory and economic analyses and GAO guidance for conducting benefit-cost analyses. To determine the steps NHTSA has taken to analyze the effects of the model year 2008 through 2011 light truck standards, we reviewed and analyzed the Energy Independence and Security Act, NHTSA’s final rulemaking on the model year 2008 through 2011 CAFE standards for light trucks, and the data used to set these standards. We interviewed NHTSA officials to determine whether NHTSA has conducted analyses to assess the outcomes of these standards—for example, improvements in vehicle fuel economy and gallons of oil saved—and requested documentation of any analyses. To determine the steps NHTSA has taken to assess the accuracy of input data and assumptions used in developing the model year 2008 through 2011 CAFE standards—particularly assumptions related to cost estimates of technology and manufacturer product plans—we interviewed NHTSA officials and requested documentation of any analyses as appropriate. For example, we assessed whether NHTSA compared data that estimated the costs of fuel-saving technology to actual cost data from 2008. We also interviewed outside experts on options NHTSA could use to conduct such an analysis and the benefits and tradeoffs of doing so. Finally, we reviewed and analyzed these interviews and documentation against GAO guidance for program evaluation. 
We conducted this performance audit from June 2009 to February 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As part of its regulatory impact analysis of potential CAFE standards, NHTSA uses the CAFE Compliance and Effects Modeling System (commonly known as the Volpe model) developed by the Volpe National Transportation Systems Center to estimate the following: (1) the most cost-effective strategy for automobile manufacturers to respond to proposed CAFE standards and (2) the impacts, such as reduced fuel consumption, increased vehicle prices, and reduced emissions, proposed CAFE standards will have on consumers, manufacturers, and society. For a visual description of the Volpe model’s analysis, see figure 6. The Volpe model’s analysis relies on a number of data inputs, including, among other things, a list of the automobile manufacturers producing vehicles for sale in the U.S. during the period covered by a CAFE rulemaking, a list of fuel-saving technologies and their estimated cost and effectiveness in reducing fuel consumption, simulated alternative CAFE scenarios (i.e., CAFE standards set at a range of levels), economic inputs such as the estimated social cost of carbon dioxide emissions and the rebound effect (a phenomenon in which individuals drive more because improving a vehicle’s fuel economy effectively lowers the cost per mile of operating that vehicle), and the emissions rates of various pollutants. These data are contained in several input files that are entered into the Volpe model. 
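The categories of input data described above can be pictured as a single structured record. The sketch below is purely illustrative: the field names, units, and values are hypothetical examples chosen for this report's discussion, not the actual Volpe input-file format or NHTSA's actual estimates.

```python
# Illustrative only: the kinds of data the Volpe model takes as input,
# sketched as a Python dict. All names and numbers are hypothetical.
volpe_inputs = {
    "manufacturers": ["Maker A", "Maker B"],  # firms selling in the U.S. market
    "technologies": [  # fuel-saving technologies with estimated cost/effectiveness
        {"name": "low-friction lubricants", "cost_usd": 30, "fuel_use_cut": 0.005},
        {"name": "6-speed transmission", "cost_usd": 250, "fuel_use_cut": 0.020},
    ],
    "scenarios_mpg": [30.0, 32.5, 35.0],  # alternative CAFE standards to simulate
    "economics": {
        "social_cost_co2_usd_per_ton": 21.0,  # hypothetical value
        "rebound_effect": 0.10,  # added driving when cost per mile falls
    },
    "emission_rates_kg_per_gallon": {"CO2": 8.9, "NOx": 0.0007},  # hypothetical
}
```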
The Volpe model’s compliance simulation demonstrates how each automobile manufacturer could attempt to comply with a higher CAFE standard by adding fuel-saving technologies to its vehicle fleet until that level is achieved. Using the information provided in the scenario input file, the Volpe model applies fuel-saving technologies in order of cost-effectiveness and ease of implementation to the vehicle models forecasted in the baseline to simulate how a manufacturer could make progress toward compliance with new CAFE standards. In previous rules, NHTSA has relied on confidential product plans provided by manufacturers to create the baseline fleet, but it has shifted away from that approach to make the baseline data more transparent for the proposed rule. The Volpe model then determines the applicability and availability of each technology to every vehicle model. If the phase-in limit for a particular technology has been reached and it is no longer available, the Volpe model proceeds to the estimated next-best technology. See figure 7 for a visual description of the process by which the Volpe model determines the applicability and availability of a given technology. The model repeats this process for each technology group, and then selects the technology with the lowest effective cost—that is, the technology that provides the greatest private benefits with the lowest cost. The compliance simulation continues to apply technologies to each manufacturer’s fleet using this approach until (1) the manufacturer’s fleet is estimated to be brought into compliance with the CAFE standard for a given model year, (2) the manufacturer has exhausted all the technology options for its fleet, or (3) the Volpe model estimates that it would be more cost-effective for the manufacturer to pay the associated CAFE fines than to apply additional technology to its fleet. 
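The three-way stopping rule described above can be sketched as a greedy loop. This is a simplified illustration with hypothetical technology costs and fine rates; the actual Volpe model works per vehicle model and accounts for phase-in limits and multiyear planning, which this sketch omits.

```python
def simulate_compliance(fleet_mpg, standard_mpg, technologies, fine_per_mpg):
    """Greedy sketch of the Volpe compliance loop (hypothetical inputs).
    Applies the technology with the lowest effective cost until the fleet
    meets the standard, technologies run out, or paying fines is cheaper."""
    # Rank technologies by effective cost: dollars per mpg gained.
    remaining = sorted(technologies, key=lambda t: t["cost"] / t["mpg_gain"])
    applied, total_cost = [], 0.0
    while fleet_mpg < standard_mpg and remaining:  # stop rules (1) and (2)
        tech = remaining.pop(0)  # estimated next-best technology
        fine_if_stopped = (standard_mpg - fleet_mpg) * fine_per_mpg
        if tech["cost"] > fine_if_stopped:  # stop rule (3): fines are cheaper
            break
        fleet_mpg += tech["mpg_gain"]
        total_cost += tech["cost"]
        applied.append(tech["name"])
    return fleet_mpg, total_cost, applied
```

With a 2-mpg shortfall and a steep fine rate, the sketch applies both hypothetical technologies; lowering the fine rate makes it stop early and accept the fines instead, mirroring outcome (3).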
The Volpe model accounts for multiyear planning, through which a manufacturer may apply more technology than necessary in earlier model years in order to carry those technologies forward into future model years and thereby avoid applying other more expensive technologies. When the Volpe model has brought each manufacturer’s fleet to one of the three outcomes listed above, the compliance simulation loop ends. The compliance simulation produces an output file that shows, for each vehicle in a manufacturer’s fleet, which technologies were included in a vehicle model before the simulation was run, which technologies were skipped in favor of other technologies, and which technologies had been applied to vehicles at the simulation’s end. The output file also shows the changes in vehicle weight, improvement in fuel economy, and incurred cost resulting from the technologies applied during the compliance simulation, as well as the total cost of any civil penalties incurred by each manufacturer. At this point, the Volpe model has a new fleet of vehicles with new prices, fuel types (gasoline or diesel), fuel-economy values, and curb weights to reflect how NHTSA estimates manufacturers will apply fuel-saving technologies in response to the CAFE requirements. Following the compliance simulation, the Volpe model’s calculation of effects component estimates the impact of the fuel-economy improvements made to vehicles to meet new CAFE standards on energy consumption, greenhouse gas emissions, and other factors. Using the forecasted vehicle fleet (i.e., the output of the compliance simulation), the Volpe model estimates the lifetime travel, fuel consumption, and carbon dioxide and criteria pollutant emissions resulting from the application of technologies to meet higher CAFE standards for each vehicle in the U.S. fleet over its anticipated life span. 
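The calculation-of-effects step, measuring each effect as the difference between its value under the baseline standard and under the new standard, can be sketched for a single vehicle as follows. All numbers are hypothetical, and the rebound adjustment here is a linear simplification of the elasticity-based treatment an analysis like NHTSA's would actually use.

```python
def lifetime_effects(lifetime_miles, baseline_mpg, new_mpg,
                     rebound=0.10, co2_kg_per_gallon=8.9):
    """Stylized sketch: fuel saved and CO2 avoided over a vehicle's life span,
    computed as baseline consumption minus consumption under the new standard.
    Inputs and the rebound treatment are hypothetical simplifications."""
    # Rebound effect: better fuel economy lowers cost per mile, so driving rises.
    fe_improvement = (new_mpg - baseline_mpg) / baseline_mpg
    miles_new = lifetime_miles * (1 + rebound * fe_improvement)
    gallons_baseline = lifetime_miles / baseline_mpg
    gallons_new = miles_new / new_mpg
    gallons_saved = gallons_baseline - gallons_new
    return {"gallons_saved": gallons_saved,
            "co2_kg_avoided": gallons_saved * co2_kg_per_gallon}
```

Note that the rebound effect partly offsets the fuel savings: because driving increases, gallons saved are somewhat less than the raw fuel-economy improvement would suggest.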
After calculating the effects for individual vehicle models, the Volpe model aggregates these effects for all the vehicles in a CAFE class produced during each model year affected by a proposed standard. The Volpe model measures the effects of increased CAFE standards by calculating the difference in the value of a variable (e.g., gallons of fuel consumed) under the baseline (model year 2011) CAFE standard and its value under a new CAFE standard. These effects include but are not limited to reductions in greenhouse gas emissions—increasing CAFE standards will reduce gasoline consumption and the amount of petroleum refined, which will reduce emissions of greenhouse gases; higher or lower emissions of air pollutants; potential increases in new vehicle prices; social value of fuel savings, which is the annual value of fuel savings over the entire expected lifetimes of vehicle models whose fuel economy is improved; economic benefits from reduced petroleum imports; valuing changes in environmental impacts (i.e., the Volpe model estimates changes in damage costs caused by carbon dioxide emissions); and social costs of added driving. In addition to the contact above, Cathy Colwell (Assistant Director), Timothy Bober, Antoinette Capaccio, Joah Iannotta, Terence Lam, Sara Ann Moessbauer, Josh Ormond, Madhav Panwar, Justin Reed, Matthew Rosenberg, Amy Rosewarne, Frank Rusco, Crystal Wesco, and Chad Williams made key contributions to this report. | In May 2009, the U.S. administration announced plans to increase the Department of Transportation's (DOT) National Highway Traffic Safety Administration's (NHTSA) corporate average fuel economy (CAFE) standards and establish the Environmental Protection Agency's (EPA) greenhouse gas (GHG) emissions standards for vehicles. NHTSA redesigned CAFE standards for light trucks for model years 2008 through 2011, and some experts raised questions about the rigor of the computer modeling NHTSA used to develop these standards. 
GAO was asked to review (1) the design of NHTSA and EPA's proposed standards; (2) how they are collaborating to set these standards; (3) improvements, if any, that NHTSA made to the modeling compared to a previous rulemaking; and (4) the extent to which NHTSA analyzed the effects of past light truck standards and the accuracy of data used to set them. GAO reviewed relevant rulemaking and modeling documents, and interviewed agency officials and other experts. NHTSA and EPA have worked to propose CAFE and GHG standards that are generally aligned so manufacturers can build a single fleet of vehicles to comply with both. The standards are based on vehicle size and will cover model years 2012 to 2016. However, differences between the standards still exist because of variation in the legal authorities of each agency. For example, certain flexibility mechanisms designed to reduce compliance costs for manufacturers apply only to GHG standards, which could make aligning them with CAFE standards more difficult. However, potentially stricter penalties for GHG standard noncompliance could improve compliance with CAFE standards. Also, while NHTSA and EPA expect benefits from adopting a standard based on vehicle size, neither standard has a mechanism to ensure that a specific national target will be met. NHTSA and EPA are collaborating by sharing resources and expertise to jointly set CAFE and GHG standards. From fiscal years 1996 through 2001, NHTSA was barred from using appropriated funds to raise CAFE standards. In contrast, EPA has continually expanded its automotive engineering expertise, including at its vehicle testing lab. As a result, EPA was able to contribute several original research studies to the proposed joint standards. Because this collaboration is not formally required and the agencies are not documenting the processes used--a recognized best practice--they may not be able to replicate them in the future. 
To set the proposed standards, NHTSA improved upon the computer model compared to the version that had been used to set the CAFE standards for 2008 through 2011 light trucks. One improvement was that NHTSA increased the model's transparency by using publicly available, rather than confidential, data to develop a baseline fleet of vehicles. With EPA's input, NHTSA updated several data inputs such as technology costs and the cost of emissions. While experts GAO interviewed had varying critiques of NHTSA's model, there was no consensus on how NHTSA could further improve it. In particular, experts' opinions differed sharply on two studies, which reported opposing findings concerning the relationship between vehicle weight (a key factor in determining fuel consumption) and safety--suggesting that additional research may be warranted. In part due to resource and data constraints, NHTSA has not yet evaluated its 2008 through 2011 light truck CAFE standards, which have a similar design to the new standards. Retrospective analyses of efforts and data inputs could inform NHTSA on the extent to which the standards met goals and provide means to improve the process of setting standards. Lacking such analysis, NHTSA does not know whether goals of the standards have been met or if changes are needed to the program. NHTSA officials said that while they would like to conduct such analyses, limited resources and time have prevented them from doing so, and they have no definitive plans to conduct them in the future. |
The U.S. General Accounting Office exists to support the Congress in meeting its constitutional responsibilities and to help improve the performance and assure the accountability of the federal government for the benefit of the American people. Given GAO’s role as a key provider of information and analyses to the Congress, maintaining the right mix of technical knowledge and subject matter expertise as well as general analytical skills is vital to achieving the agency’s mission. GAO spends about 80 percent of its resources on its people. And yet, like other federal agencies, GAO has faced significant human capital challenges—challenges that, if not effectively addressed, could impair the timeliness and quality of its work for its congressional clients and the American people they represent. A number of these challenges were created by the significant reduction in the size of GAO undertaken in the mid-1990s. Specifically, from 1992 through 1997, GAO underwent budgetary cuts totaling 33 percent in constant fiscal 1992 dollars. To achieve those budgetary reductions while meeting other agency needs, GAO reduced the number of its employees by 39 percent through extensive field office closings and targeted reductions in headquarters staff. To conform to the reduced budgetary ceiling, GAO then instituted a virtual hiring freeze at the entry level, cut training for all staff, suspended agencywide incentive programs, and at times used mid-level promotions as a retention strategy. Because of the reduction in hiring, the average age of the agency’s workforce increased, and the retirement eligibility of staff accelerated. GAO’s analyses showed that by the end of fiscal 2004, about 34 percent of all GAO employees would be eligible to retire. For upper-level staff, the proportion eligible to retire was even larger—48 percent of all band III management-level employees and 55 percent of all Senior Executive Service members. 
Thus, as at many federal agencies, GAO’s human capital profile reflected a workforce that was smaller, closer to retirement, and at increasingly higher-grade levels. In addition to the succession-related concerns raised by such a profile, GAO also faced a range of skills gaps. As major policy issues have become more complex and as technology has radically altered the way the federal government conducts business, the types of skills and knowledge needed by GAO staff have been evolving, and the need for sophisticated technical skills has been increasing. Early in his tenure, Comptroller General David Walker recognized that GAO’s human capital profile and selected skills gaps presented serious challenges to GAO’s future ability to serve the Congress. Comptroller General Walker also sought to have GAO become a model federal agency and a world-class professional services organization that focuses on delivering positive results for the Congress and the country. The agency’s ability to operate in an efficient, effective, and economical manner and meet the ever-changing and increasingly complex needs of the Congress could be seriously compromised if GAO’s human capital challenges were not effectively addressed. As a first step in addressing these concerns, GAO used its internal administrative authority to implement measures to improve the alignment of its human capital with the agency’s overall strategic goals and objectives as contained in GAO’s Strategic Plan. 
Subsequent to developing its first strategic plan, GAO undertook a number of major human capital initiatives, including an agencywide realignment and reorganization, an overall human capital self-assessment, the revitalization of its recruiting and college relations programs, an agencywide knowledge and skills inventory, the development of competency-based performance appraisal systems, the establishment of an Employee Advisory Council, the enhancement of GAO’s employee benefit programs, a comprehensive employee feedback survey, a workforce-planning process, and the establishment of a professional development program for entry-level analysts. In addition to these initiatives, GAO’s leadership recognized that additional steps to reshape the agency’s workforce were necessary and that preexisting personnel authorities did not allow the agency to address these challenges effectively. Therefore, GAO sought legislation establishing narrowly tailored flexibilities that would help to reshape the agency’s workforce and recruit and retain staff with needed technical skills. Based on a sound business case, Public Law 106-303—known as the GAO Personnel Flexibilities Act—became law in October 2000. The act authorized the Comptroller General to implement the following personnel flexibilities: 1. Offer voluntary early retirement to realign the workforce to meet budgetary constraints or mission needs; correct skill imbalances; or reduce high-grade, managerial, or supervisory positions. 2. Offer separation incentive payments to realign the workforce to meet budgetary constraints or mission needs; correct skill imbalances; or reduce high-grade, supervisory, or managerial positions. 3. Establish modified regulations for the separation of employees during a reduction or other adjustment in force. 4. 
Establish senior-level scientific, technical, and professional positions and provide those positions with the same pay and benefits applicable to the Senior Executive Service while remaining within GAO’s current allocation of super-grade positions. After the Congress passed the act in 2000 and the President signed it into law, GAO began the process of developing regulations to implement the four authorities it established. Because stakeholder involvement is a critical component of successful human capital management, particularly when initiatives are being introduced, GAO established a standard practice to ensure employee involvement in significant agency initiatives. GAO’s standard practice involves the initial discussion and presentation of draft proposals to members of GAO’s Employee Advisory Council—a panel of employees representing a variety of employee constituent groups—and also to the agency’s senior executives. The Comptroller General was personally involved in the vast majority of those exchanges, which afforded an opportunity for the direct communication of employees’ and managers’ reactions. After the views of employees and managers were considered, further changes were made, if needed, before the draft proposal was issued to all employees for their review and consideration. Materials were posted on GAO’s intranet home page, and employees were notified by E-mail that proposals were available for their review, comments, and suggestions for a period of 30 days. The documents were posted in a user-friendly format that allowed employees to access the documents and provide comments directly on any or all of the provisions. Generally, the regulations were accompanied by “Frequently Asked Questions,” which elaborated on and explained the details of the provisions. The agency received 60 comments on the voluntary early retirement order, 33 on the workforce restructuring order, and 12 on the senior-level order. 
These comments were collected, reviewed, and carefully considered by GAO’s Executive Committee prior to finalizing the regulations. The approaches that GAO took in implementing these four flexibilities as well as the results that the agency has achieved are described in the following four sections. At the time the GAO Personnel Flexibilities Act was passed, GAO’s workforce was sparse at the entry level and plentiful at the midlevel. The agency was concerned about its ability to support the Congress with experienced and knowledgeable staff, given the significant percentage of the agency’s senior managers and analysts reaching retirement eligibility and the relatively small number of entry-level employees who were in training to replace more senior staff. The use of the voluntary early retirement authority provided in section 1 of the act is one of the tools that the agency has used to confront this serious issue—one that is facing much of the federal community. The act allows the Comptroller General to offer voluntary early retirement to up to 10 percent of the workforce when necessary or appropriate to realign the workforce to address budgetary or mission constraints; correct skill imbalances; or reduce high-grade, supervisory, or managerial positions. This flexibility represents a proactive use of early retirement to shape the workforce to prevent or ameliorate future problems. GAO Order 2831.1, Voluntary Early Retirement, containing the agency’s final regulations, was issued in April 2001 and is included in appendix I. Under the regulations, each time the Comptroller General approves a voluntary early retirement opportunity, he establishes the categories of employees who are eligible to apply. These categories are based on the need to ensure that those employees who are eligible to request voluntary early retirement are those whose separations are consistent with one or more of the three reasons for which the Comptroller General may authorize early retirements. 
Pursuant to GAO’s regulations, these categories are defined in terms of one or more of the following criteria: organizational unit or subunit, occupational series, grade or band level, skill or knowledge requirements, performance appraisal average, geographic location, or other similar factors that the Comptroller General deems necessary and appropriate. Since it is essential that GAO retain employees with critical skills as well as its highest performers, certain categories of employees have been ineligible under the criteria. Some examples of ineligible categories are employees receiving retention allowances because of their unusually high or unique qualifications; economists, because of the difficulty that the agency has experienced in recruiting them; and staff in the information technology area. In addition, employees with performance appraisal averages above a specified level have not been eligible under the criteria. To give the fullest consideration to all interested employees, however, any employee may apply for consideration when an early retirement opportunity is announced, even if he or she does not meet the stated criteria. The Comptroller General may authorize early retirements for these applicants on the basis of the facts and circumstances of each case. The Comptroller General or his Executive Committee designee(s) considers each applicant and makes final decisions on the basis of the institutional needs of GAO. Only employees whose release is consistent with the law and GAO’s objective in allowing early retirement are authorized to retire early. In some cases, this has meant that employees’ requests must be denied. GAO held its first voluntary early retirement opportunity in July 2001. Employees who were approved for early retirement were required to separate in the first quarter of fiscal 2002. As required by the act, information on the fiscal 2002 early retirements was reported in an appendix to our 2002 Performance and Accountability Report. 
Another voluntary early retirement opportunity was authorized in fiscal 2003, and employees were required to separate by March 14, 2003. Table 1 provides the data on the number of employees separated by voluntary early retirement as of May 30, 2003. Of the 79 employees who separated from GAO through voluntary early retirement, 66, or 83.5 percent, were high-grade, supervisory, or managerial employees. High-grade, supervisory, or managerial employees are those who are in grade GS-13 or above, if covered by GAO’s General Schedule; in band II or above, if covered by GAO’s banded systems for Analysts and Attorneys; or in any position in GAO’s Senior Executive Service or Senior Level system. GAO’s transformation effort is a work-in-progress and, for that reason, the agency supports additional legislation to make the voluntary early retirement provision in section 1 of Public Law 106-303 permanent. While the overall number of employees electing early retirement has been relatively small, GAO believes that careful use of voluntary early retirement has been an important tool in incrementally improving the agency’s overall human capital profile. Each separation has freed resources for another use, enabling GAO to fill an entry-level position or to fill a position that will reduce a skill gap or address other succession concerns. Importantly, these separations are accomplished voluntarily with the acquiescence of both the employee and the agency. Although GAO has made progress in improving its human capital profile, there is still work to do. GAO needs to retain its option to use this flexibility when necessary to address current and future concerns. In making this recommendation, GAO points to its progress in changing the overall shape of the organization. As illustrated in figure 1, by the end of fiscal year 2002, GAO had almost a 74 percent increase in the proportion of staff at the entry level (Band I) compared with fiscal year 1998. 
Also, the proportion of the agency’s workforce at the midlevel (Band II) decreased by about 16 percent. Since the beginning of fiscal 2001, a total of 447 employees have retired from GAO; 79 (or 17.6 percent) of those retirements are the result of GAO’s early retirement offerings, and as noted above, 83.5 percent of those retiring were high-grade, supervisory, or managerial employees. The loss of these higher-level staff, along with other employees whose skills are no longer essential to GAO, has helped the agency address succession planning and skill imbalance issues, in addition to increasing the numbers of entry-level staff who can be hired. In addition to authorizing voluntary early retirement for GAO employees, the act permits the Comptroller General to offer voluntary separation incentive payments—also known as “buyouts”—when necessary or appropriate to realign the workforce to meet budgetary constraints or mission needs; correct skill imbalances; or reduce high-grade, supervisory, or managerial positions. Under the act, up to 5 percent of employees could be offered such an incentive, subject to criteria established by the Comptroller General. The act requires GAO to deposit into the U.S. Treasury an amount equivalent to 45 percent of the final annual basic salary of each employee to whom a buyout is paid. The deposit is in addition to the actual buyout amount, which can be up to $25,000 for an approved individual. Given the many demands on agency resources, these costs present a strong financial incentive to use the provision sparingly, if at all. GAO anticipates little, if any, use of this authority because of the associated costs. For this reason, as well as to avoid creating unrealistic employee expectations, GAO has not developed and issued agency regulations to implement this section of the act. GAO also supports legislation making section 2—authorizing the payment of voluntary separation incentives—permanent. 
GAO notes that the Homeland Security Act of 2002 provides most federal agencies with buyout authority. Agencies with preexisting legislative authority to offer buyouts retain their authority, although they may be covered under the Homeland Security Act provision as well. Although GAO has not yet used its buyout authority and has no plans to do so in the foreseeable future, GAO recommends the retention of this flexibility and the elimination of the expiration date of December 31, 2003. The continuation of this provision maximizes the options available to the agency to deal with future circumstances. Since GAO is also eligible to request buyouts under the provisions of the Homeland Security Act, the agency will consider its options under this provision as well. Section 3 of the act allows the Comptroller General to prescribe regulations for the separation of GAO employees during a reduction in force or other adjustment in force consistent with those issued by the Office of Personnel Management under section 3502(a) of title 5. In the event that GAO is required to initiate involuntary job reductions, employees would compete for retention on the basis of the following factors in descending order of priority: tenure, veteran’s preference, performance ratings, and length of federal service. At the discretion of the Comptroller General, retention may also be based on other objective factors, including skills and knowledge in addition to the preceding criteria. After careful analysis and deliberation, GAO Order 2351.1, Workforce Restructuring Procedures for the General Accounting Office, containing final agency regulations, was issued in January 2003. Those regulations, which are included in appendix II, provide for establishing “zones of consideration,” which define the geographical and organizational boundaries within which employees compete for retention. 
All employees would be placed in “job groups” that comprise all positions within a zone of consideration that are at the same grade or band level and that perform the same duties and responsibilities. The highest priority would be placed on an employee’s tenure of employment and veteran’s preference. After consideration of those two factors, an employee would be ranked on the basis of his or her retention score. This score is based on the employee’s 3- year appraisal average and his or her length of creditable federal service; greater weight is placed on performance than on length of service. GAO has not taken any actions under its workforce restructuring regulations and is sensitive to concerns that were expressed prior to the passage of Public Law 106-303 about the potential impact of GAO’s modified reduction in force procedures on veterans. GAO is committed to protecting the rights of veterans in a manner consistent with title 5 and has retained all veterans’ protections in GAO orders. Therefore, GAO does not foresee any impact on veterans that would differ from those at any other agency involved in realigning or reducing their workforce. Section 3, authorizing the Comptroller General to develop modified regulations for the conduct of a reduction or other adjustment in force, is a permanent authority. The act requires GAO to provide any recommendations for changes to the section at this time. GAO is unable to offer recommendations, however, because the procedures have not yet been used. Circumstances leading to the decision to separate employees involuntarily are infrequent, and it may be years before the agency has any significant experience with the use of its procedures. Therefore, GAO has no recommendations for changes to section 3 at this time. To address a variety of complex issues, GAO needed to increase its skill base in such highly competitive hiring areas as economics, information technology, actuarial science, and evaluation methodology. 
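Looping back to the retention score described above, which weights the 3-year appraisal average more heavily than length of creditable federal service, a minimal sketch follows. The specific weights, the 1-to-5 rating scale, and the 40-year service cap are hypothetical assumptions chosen for illustration, not GAO's actual formula.

```python
def retention_score(appraisal_avg, years_of_service,
                    perf_weight=0.7, service_weight=0.3):
    """Hedged sketch of a retention score in which performance counts more
    than length of service. Weights and scales are hypothetical."""
    perf_norm = appraisal_avg / 5.0  # assume appraisal ratings run 1 to 5
    service_norm = min(years_of_service / 40.0, 1.0)  # cap at a 40-year career
    return perf_weight * perf_norm + service_weight * service_norm
```

Under any weighting of this form with perf_weight above service_weight, raising the appraisal average moves the score more than an equivalent proportional gain in service time, matching the stated priority on performance.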
Section 4 of the act permits GAO to establish senior-level positions to meet critical scientific, technical, or professional needs. To recruit and retain qualified individuals, the act allows GAO to extend the rights and benefits of Senior Executive Service employees to these positions. GAO Order 2319.1, containing the agency’s regulations for the employment of senior-level staff, was issued in March 2001 and is included in appendix III. GAO has used this authority to fill eight senior-level positions, including that of a Chief Accountant, Chief Economist, Chief Statistician, and Chief Actuary. In addition, three senior-level technologists have been appointed as well as a senior-level technologist with expertise in cryptography. The expertise of these senior experts in highly specialized areas of economics, technology, statistics, and cryptography has contributed significantly to GAO’s efforts to support the Congress. The accomplishments achieved with the expertise and contributions of the agency’s senior-level employees include work on biometric technologies for U.S. border security, anthrax irradiation of U.S. mail, and National Missile Defense systems. These experts have also contributed to ensuring that GAO’s work is based on the most technically accurate methodologies for conducting cost-benefit studies and for utilizing appropriate data sources. GAO has found this flexibility to be a critical component in its efforts to ensure that the agency maintains the skills and knowledge necessary to address the many highly complex areas of interest to the Congress. The act does not require recommendations from GAO on section 4, which permits the agency to establish senior-level positions to meet critical scientific, technical, or professional needs. When the act was under consideration, some GAO employees expressed their concerns about the legislation to their Congressional representatives. 
To ensure the active consideration of employees’ views, the act requires that this report include any assessments or recommendations of the GAO Personnel Appeals Board and of any interested groups or associations representing officers or employees of GAO. GAO also agreed to include in this report information about any impact upon employees’ attitudes and opinions, as measured by employee feedback survey responses. In response to these requirements, GAO’s Human Capital Officer sent a request soliciting recommendations for inclusion in this report to the Executive Director of the Personnel Appeals Board. The agency also alerted the members of GAO’s Employee Advisory Council by sending all members a memorandum notifying them of this provision. The topic was included on the agenda of the council’s quarterly meeting with the Comptroller General in March, and all members of the Employee Advisory Council were given a draft copy of the report for their review. GAO’s managing directors were also given a draft of this report for review. In addition, on May 21, 2003, a meeting of all GAO’s senior executives was held. At this meeting the Comptroller General solicited the views of senior staff on extending provisions of Public Law 106-303. The Personnel Appeals Board did not submit a specific assessment of the act’s implementation. However, in a letter to GAO’s Human Capital Officer, dated May 15, 2003, Beth Don, Executive Director of the Personnel Appeals Board, stated that the board would be “willing to do a more comprehensive report in a year or so, at which point we think there will be more information available on the implementation and effectiveness of the personnel flexibilities granted under the Act.” Importantly, Ms. Don indicated that no cases had been filed with the Personnel Appeals Board concerning the matters covered by the act. 
She also stated that the board did not believe it was appropriate to comment, among other things, on the substantive nature of the regulations promulgated by GAO, or the manner in which the regulations were promulgated, since the board may be called upon to adjudicate matters relating to the act and its implementation. The Employee Advisory Council responded on May 13, 2002, that it did not have any comments on the report at this time. Using electronic polling technology that allows for confidential responses, GAO’s senior executives were asked if the voluntary early retirement and voluntary separation incentive authorities should be made permanent. All but one of the 110 respondents agreed or strongly agreed that GAO should seek legislation to make these provisions permanent. One respondent was undecided. As part of ongoing agency efforts to monitor progress in people measures, GAO conducted employee feedback surveys in 1999 and 2002—before and after the act’s passage. These surveys asked employees for their agreement or disagreement with a variety of statements relating to their work life but were not designed to measure the impact of the act’s flexibilities on employee satisfaction. The 2002 survey elicited an 89 percent response rate, which was even better than the outstanding 87 percent achieved in 1999. On the basis of a comparison of responses to key questions in 2002 and 1999, employee satisfaction (as measured by the number of “strongly agree”/“agree” responses) was up in 50 of the 52 categories. Negative responses (as measured by the number of “strongly disagree”/“disagree” responses) also declined in 50 of the 52 categories. GAO believes that the impact of the legislation on its employees has been positive. Clearly, the employees who requested and were approved for early retirement benefited from the act. Furthermore, the realignment of resources resulting from these retirements has had a positive impact on the remaining employees as well. 
Ultimately, GAO’s efforts to improve the strategic management of GAO’s human capital, of which the legislation is a part, benefit all of GAO. Having the right people in the right places makes it easier for all GAO employees to be successful in accomplishing their part of the agency’s mission. In the final analysis, the agency’s efforts to maximize its value allow us to better serve the Congress and the American people. | Leading public organizations here and abroad have found that strategic human capital management must be the centerpiece of any serious change management initiative and effort to transform the culture of government agencies. GAO is not immune to these challenges facing the federal government. Over the past 3 years, however, we have made considerable progress toward addressing a number of our major human capital challenges through various initiatives. While many of the initiatives were administrative in nature, the additional flexibilities that the Congress authorized in Public Law 106-303 have helped to ensure that we have the right staff, with the right skills, in the right locations to better meet the needs of the Congress and the American people. |
In fiscal year 2012, VA provided prescription drug coverage to about 8.8 million eligible veterans primarily through its medical centers and Consolidated Mail Outpatient Pharmacy (CMOP). VA uses a direct purchase approach to acquire drugs directly from manufacturers for distribution through its facilities. These purchases are usually made under contract with a prime vendor that provides the drugs at a fixed percentage discount off the lowest price otherwise available for each drug. VA’s drug prices are generally below wholesale prices provided to commercial buyers and do not include costs for storage, overhead, or dispensing. VA also has access to federal pricing arrangements and other discounts to help control drug spending, including the following: Federal Supply Schedule (FSS) prices: These prices are available to all direct federal purchasers and are intended to be no more than the prices manufacturers charge their most-favored nonfederal customers under comparable terms and conditions. Big Four prices: These prices are available to DOD, VA, the Public Health Service, and the U.S. Coast Guard. By law, these prices are 24 percent lower than nonfederal average manufacturer prices. VA national contracts: These contracts provide additional pricing concessions in return for commitment to potential vendors, resulting in pricing lower than FSS. The VA national contracts program is a separate contract vehicle from the FSS contract program. In fiscal year 2012, VA’s prescription drug spending totaled about $4.2 billion, according to VA officials. In fiscal year 2012, DOD provided prescription drug coverage to about 9.7 million active-duty and retired military personnel, their dependents, and others through its military treatment facilities (MTF), the TRICARE Mail Order Pharmacy (TMOP), and retail pharmacies. 
As with VA, DOD has access to FSS and Big Four prices and uses the direct purchase approach to buy drugs at a discount through a prime vendor for distribution through its MTFs and TMOP. Therefore, DOD’s direct purchase drug prices are also generally below wholesale prices provided to commercial buyers and do not include costs for storage, overhead, or dispensing. In fiscal year 2012, DOD’s prescription drug spending totaled about $7.6 billion, according to DOD officials. Both DOD and VA use prescription drug formularies to help control prescription drug costs. In our November 2012 report on DOD and VA health care, agency officials told us that some of the differences in the agencies’ formularies are due to differences in the structure of their health care systems. For example, DOD covers prescriptions written by both military and civilian providers, and DOD officials previously reported that, as a result, the department needs to have a broad formulary to account for differences in prescribing practices among different providers. In contrast, VA primarily covers medications for eligible beneficiaries through prescriptions written by its own providers. As VA officials reported, this allows VA to have more direct control over the medications that are prescribed to its patient population. Both agencies provide access to nonformulary medications determined by a physician to be clinically necessary. DOD paid a higher average unit price than VA across the entire sample of 83 drugs and for the subset of generic drugs, but paid a lower average price than VA for the subset of brand-name drugs. Specifically, DOD’s average unit price for the entire sample was 31.8 percent higher than VA’s average price, and DOD’s average unit price for the subset of 40 generic drugs was 66.6 percent higher than VA’s average price. However, VA’s average unit price for the subset of 43 brand-name drugs was 136.9 percent higher than DOD’s average price. (See fig. 1.) 
DOD paid an average of $0.11 per unit more than VA across the entire sample of 83 drugs and an average of $0.04 per unit more than VA for the generic drugs in our sample, while VA paid an average of $1.01 per unit more than DOD for the brand-name drugs in our sample. These results were consistent with each agency obtaining better prices on the type of drugs that made up the majority of its utilization: generic drugs accounted for the majority (83 percent) of VA’s utilization of drugs in the sample for the first quarter of 2012, and brand-name drugs accounted for the majority (54 percent) of DOD’s utilization of the sample drugs during the same period. DOD officials told us that in certain circumstances they are able to obtain competitive prices for brand-name drugs—even below the prices for generic equivalents—and therefore will often preferentially purchase brand-name drugs. When we examined the prices paid for the individual brand-name drugs in our sample, DOD paid higher average unit prices than VA for 23 of the 43 drugs (see fig. 2), while VA paid a higher average price for the remaining 20 brand-name drugs. DOD also paid a higher average price than VA for a majority of the generic drugs in our sample. Specifically, DOD paid a higher price for 32 of the 40 generic drugs in our sample, while VA paid a higher average price for the remaining 8 generic drugs. (See app. II for details on the relative prices paid by DOD and VA for the 83 individual drugs in our sample.) DOD and VA face continued challenges in controlling drug costs. Our findings suggest that there may be opportunities for savings with directly purchased drugs. DOD and VA paid different prices for the drugs in our sample; for 11 of the 83 drugs, one agency paid at least 100 percent more than the other agency. DOD paid a lower average price for the brand-name drugs in our sample while VA paid less, on average, for the generic drugs and across the entire sample. Our past reports highlight the importance of DOD and VA controlling drug costs. 
While the prescription drug market is complex and there are many factors affecting the prices DOD and VA are able to obtain for directly purchased drugs, differences in prices paid for specific drugs may provide insights into opportunities for each agency to obtain additional savings on at least some of the drugs they purchase. DOD and VA reviewed a draft of this report and provided written comments, which are reprinted in appendixes III and IV, respectively. DOD generally agreed with our methodology and findings. In addition, DOD noted that expressing differences between DOD and VA prices for the sample as percentages rather than actual dollar amounts may give the impression that significant dollar values are involved rather than a few cents or less per unit. While we clarified in the report the dollar amount of price differences, our findings indicated that small per-unit price differences may result in significant additional expenditures when accounting for the quantities purchased by the agencies. DOD also described additional factors beyond those mentioned in our findings that may contribute to differences in prices paid by DOD and VA. VA expressed concerns with the content of the report and suggested additional analyses. For example, VA suggested that analyses accounting for per-beneficiary costs, formulary design, utilization management, and the mix of drugs used for a particular disease state would have provided more appropriate comparisons. While we agree that such analyses could be useful, the scope of our work was targeted to a comparison of prices paid by each agency for a sample of high-utilization and high-expenditure drugs and was not intended to capture all factors that can affect pharmaceutical spending. We noted the limitations of our results in the report, including that our results cannot be applied to all drugs purchased by the agencies. 
Further, our report acknowledges the limitations involved with estimating potential cost savings in this complex area. Nonetheless, we maintain that comparing unit prices paid for selected generic and brand-name drugs by different federal agencies has value in identifying specific drugs with price differences that may warrant further consideration for potential savings. VA also noted that most of its drug purchases are made through contracts with a prime vendor that provides a negative distribution fee (i.e., discount), resulting in savings to VA. We revised the introduction to make this information more prominent earlier in the report. VA agreed with our conclusion that the maximum potential savings provided in our findings are unlikely to be achieved and noted that obtaining lower prices on brand-name drugs would require shifting utilization away from generic drugs, potentially increasing overall drug costs. VA also stated that our list of factors affecting the prices each agency is able to obtain did not specifically include the ability of VA to direct utilization toward a limited number of drugs within a therapeutic class to achieve savings. We revised the report to clarify this point. VA also stated that, under an applicable Federal Acquisition Regulation (48 C.F.R. § 8.002), it was required to purchase more expensive versions of generic minocycline through the FSS contract rather than versions otherwise available. VA therefore requested that GAO remove the potential savings related to the purchase of this drug from the total projected savings in the report. However, our report generally reflects a number of factors (including differences in purchasing authority) that may limit each agency’s ability to achieve the maximum potential savings we calculated, and it was beyond the scope of our report to apply these factors to each individual drug. DOD and VA also provided technical comments, which we incorporated as appropriate. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to relevant congressional committees and other interested members. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix V. In order to compare direct purchase prices paid by the Department of Defense (DOD) and Department of Veterans Affairs (VA) for prescription drugs, we chose a sample of drugs important to both agencies. We obtained prime vendor data for the first calendar quarter of 2012 for drugs dispensed to DOD and VA beneficiaries through the agencies’ own medical facilities and mail order pharmacies. We excluded physician-administered outpatient prescription drugs and over-the-counter drugs from our sample; we also excluded items that are not traditionally considered drugs such as bandages, syringes, needles, diabetes test strips, saline, and water for irrigation. We used data from Red Book to determine the brand-name or generic status of each drug. Utilization was determined using the National Council for Prescription Drug Programs (NCPDP) Billing Unit Standard. When calculating expenditures, we used the agencies’ costs to purchase each drug without accounting for any future offsets from beneficiary copayments for those drugs. We aggregated the utilization and expenditure data at the drug level (drug name, strength, and dosage form) separately for DOD and VA. 
For example, all national drug codes (NDC) corresponding to 10 mg tablets of Lipitor purchased by VA were aggregated and the associated utilization and expenditures were summed and compared to other brand-name drugs, while NDCs corresponding to 10 mg tablets of atorvastatin (the generic equivalent of Lipitor) purchased by VA were aggregated separately and compared to other generic drugs. We ranked the top 100 brand-name and top 100 generic drugs separately for each agency on the basis of utilization and expenditures and then combined these rankings to determine the top brand-name and generic drugs that were purchased by both agencies. We excluded drugs that were in the top 100 for one agency but not for the other agency in order to focus our analysis on drugs that were important for both DOD and VA. For example, if a drug was 10th highest in utilization for VA but was not in the top 100 for DOD, that drug would not be included in our sample. Some drugs that were excluded from the sample appeared to be more appropriate for the beneficiary population of one agency than the other. For example, primaquine phosphate—a drug used to treat malaria—was the fourth-highest-expenditure generic drug for DOD but was not in the top 100 for VA. Some other excluded drugs were additional strengths of drugs that did appear in our sample. For example, simvastatin (80 mg tablet) was the sixth-highest-expenditure generic drug for VA but was not in the top 100 for DOD and thus was excluded from the sample. However, the 20 mg and 40 mg strengths of simvastatin were included in the sample. The drug sample was selected to include the top 50 brand-name and top 50 generic drugs; 25 of the brand-name drugs and 25 of the generic drugs were selected on the basis of the combined DOD and VA utilization ranks, and the other 25 brand-name and 25 generic drugs were selected on the basis of the combined DOD and VA expenditure ranks. 
After accounting for drugs that were in both the high-expenditure group and the high-utilization group, the final sample contained 43 brand-name drugs and 40 generic drugs and accounted for 37.0 percent of DOD utilization, 31.7 percent of DOD expenditures, 27.7 percent of VA utilization, and 34.8 percent of VA expenditures for directly purchased prescription drugs in the first calendar quarter of 2012. (See table I for a list of the drugs in the sample.) After selecting the sample, we calculated average unit prices paid by DOD and VA for all individual drugs by dividing total expenditures by total utilization for each drug. We also calculated average unit prices for the entire sample, the subset of brand-name drugs, and the subset of generic drugs by dividing the total expenditures for all relevant drugs by the total utilization of those drugs. In order to maintain the confidentiality of drug prices, in each case we converted from absolute prices to relative prices by assigning 100.0 to the lowest price and determining the higher price as a percentage above the lowest price. We compared the average unit prices obtained by DOD and VA for each drug to the Federal Supply Schedule (FSS) and Big Four prices available to these agencies. We interviewed DOD and VA officials about drug purchasing approaches they use and factors affecting the prices they are able to obtain. Finally, we determined the maximum potential savings that might have been obtained if each agency had been able to obtain the lower of the DOD and VA average unit prices for each of the 83 drugs in the sample. The results of our analyses are limited to the 83 high-utilization and high-expenditure drugs in our sample for the first calendar quarter of 2012 and are not necessarily applicable across all drugs. 
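The unit-price, relative-price, and maximum-potential-savings arithmetic described in this methodology can be sketched in a few lines of Python. The figures below are illustrative placeholders, not actual DOD or VA data.

```python
# Sketch of the price-comparison arithmetic from the methodology.
# All dollar amounts and quantities here are hypothetical.

def average_unit_price(expenditures, utilization):
    """Average unit price = total expenditures / total utilization."""
    return expenditures / utilization

def relative_prices(price_a, price_b):
    """Index the lower price to 100.0 and express the higher price as a
    percentage of it, as done to keep absolute drug prices confidential."""
    low, high = min(price_a, price_b), max(price_a, price_b)
    return 100.0, 100.0 * high / low

# Illustrative drug: agency A spent $500 for 1,000 units;
# agency B spent $660 for 1,200 units of the same drug.
price_a = average_unit_price(500.0, 1000)    # $0.50 per unit
price_b = average_unit_price(660.0, 1200)    # $0.55 per unit

low_index, high_index = relative_prices(price_a, price_b)
# high_index of 110.0 means the higher price is 10 percent above the lower.

# Maximum potential savings if the higher-paying agency had obtained the
# lower unit price for its own utilization of this drug:
savings = (price_b - price_a) * 1200
```

As the report notes, such maximum savings are an upper bound: differences in purchasing authority, formularies, and utilization mean neither agency could necessarily obtain the other's price for every drug.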
Figure 3 shows the 55 drugs (out of 83 drugs in our sample) for which the Department of Defense (DOD) paid a higher average unit price than the Department of Veterans Affairs (VA) and the percentage by which the DOD price exceeded the VA price. Figure 4 shows the 28 drugs (out of 83) for which VA paid a higher average unit price than DOD and the percentage by which the VA price exceeded the DOD price. In addition to the contact named above, key contributors to this report were Robert Copeland, Assistant Director; Zhi Boon; Karen Howard; Laurie Pachter; and Carmen Rivera-Lowitt. DOD and VA Health Care: Medication Needs during Transitions May Not Be Managed for All Service Members. GAO-13-26. Washington, D.C.: November 2, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. Follow-Up on 2011 Report: Status of Actions Taken to Reduce Duplication, Overlap, and Fragmentation, Save Tax Dollars, and Enhance Revenue. GAO-12-453SP. Washington, D.C.: February 28, 2012. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. VA Drug Formulary: Drug Review Process Is Standardized at the National Level, but Actions Are Needed to Ensure Timely Adjudication of Nonformulary Drug Requests. GAO-10-776. Washington, D.C.: August 31, 2010. Prescription Drugs: Overview of Approaches to Control Prescription Drug Spending in Federal Programs. GAO-09-819T. Washington, D.C.: June 24, 2009. Military Health Care: TRICARE Cost-Sharing Proposals Would Help Offset Increasing Health Care Spending, but Projected Savings Are Likely Overestimated. GAO-07-647. Washington, D.C.: May 31, 2007. Prescription Drugs: An Overview of Approaches to Negotiate Drug Prices Used by Other Countries and U.S. Private Payers and Federal Programs. GAO-07-358T. Washington, D.C.: January 11, 2007. 
| In fiscal year 2012, DOD and VA spent a combined $11.8 billion to purchase drugs on behalf of about 18.5 million beneficiaries. Both agencies purchase drugs directly from manufacturers via prime vendors--intermediaries that provide the drugs at a discount off the lowest price that would otherwise be available. The agencies dispense these drugs to beneficiaries through their medical facilities and pharmacies, including their mail order pharmacies. GAO was asked to compare prices paid for prescription drugs across federal programs. This report describes direct purchase prices paid by DOD and VA for a sample of prescription drugs. GAO will compare drug prices paid using other approaches and by other federal programs in future work. Using prime vendor data provided by these agencies for the first quarter of 2012, GAO selected a sample of high-utilization and high-expenditure drugs important to both DOD and VA and compared average unit prices paid by these agencies for those drugs. The sample contained 43 brand-name and 40 generic drugs and accounted for 37 percent of DOD utilization, 32 percent of DOD expenditures, 28 percent of VA utilization, and 35 percent of VA expenditures for directly purchased drugs in that quarter. GAO calculated average unit prices by dividing total expenditures by total utilization for each drug, the entire sample, and the subsets of brand-name and generic drugs. GAO also compared DOD and VA average unit prices to the FSS and Big Four prices for each drug. GAO interviewed DOD and VA officials about their drug purchasing approaches and factors affecting the prices they are able to obtain. 
When GAO compared prices paid by the Department of Defense (DOD) and the Department of Veterans Affairs (VA) for a sample of 83 drugs purchased in the first calendar quarter of 2012, DOD's average unit price for the entire sample was 31.8 percent ($0.11 per unit) higher than VA's average price, and DOD's average unit price for the subset of 40 generic drugs was 66.6 percent ($0.04 per unit) higher than VA's average price. However, VA's average unit price for the subset of 43 brand-name drugs was 136.9 percent ($1.01 per unit) higher than DOD's average price. These results were consistent with each agency obtaining better prices on the type of drugs that made up the majority of its utilization: generic drugs accounted for 83 percent of VA's utilization of the sample drugs and brand-name drugs accounted for 54 percent of DOD's utilization of the sample drugs. DOD officials told GAO that in certain circumstances they are able to obtain competitive prices for brand-name drugs--even below the prices for generic equivalents--and therefore will often preferentially purchase brand-name drugs. At the individual drug level, DOD paid higher average unit prices than VA for 32 of the 40 generic drugs and for 23 of the 43 brand-name drugs in the sample, while VA paid higher average unit prices for the remaining 8 generic drugs and 20 brand-name drugs. In nearly every case, substantially higher prices paid by one agency were correlated with substantially lower utilization by that agency. Specifically, for 10 of the 11 drugs for which one agency paid more than 100 percent above the price paid by the other agency, the agency that paid a substantially higher price also had substantially lower utilization. 
However, even when one agency paid a substantially higher price than the other, in all 11 cases both agencies paid less than the highest of the Federal Supply Schedule (FSS) prices available to all direct federal purchasers or the Big Four prices available to the four largest government purchasers. Additionally, in most cases (9 out of 11 drugs) both agencies paid less than the lowest of these prices. The lower prices obtained by one agency may be due to factors such as differences in the agencies' formulary design and prescription practices, price and rebate negotiations with manufacturers that may not be available more broadly to the other agency, and differences in utilization practices between the agencies based on differences in their beneficiary populations. DOD and VA face continued challenges in controlling drug costs. While the prescription drug market is complex and there are many factors affecting the prices DOD and VA are able to obtain for directly purchased drugs, differences in prices paid for specific drugs may provide insights into opportunities for each agency to obtain additional savings on at least some of the drugs they purchase. In commenting on a draft of this report, DOD generally agreed with GAO's findings and described additional factors that may contribute to differences in prices paid by DOD and VA. VA expressed concerns with the content of the report. VA suggested additional analyses and highlighted the impact of program design on each agency's use of prescription drugs. GAO maintains that its analyses have value in identifying opportunities for savings and the report acknowledges the limitations involved with estimating potential cost savings in this complex area. DOD and VA also provided technical comments that GAO incorporated as appropriate. |
Floods are the most common and destructive natural disaster in the United States. However, flooding is generally excluded from homeowners insurance policies, which typically cover damages from other losses, such as wind, fire, and theft. Because of the catastrophic nature of flooding, the difficulty of adequately predicting flood risks, and uncertainty surrounding the possibility of charging actuarially sound premium rates, private insurance companies have historically been largely unwilling to underwrite flood insurance. NFIP, which makes federally backed flood insurance available to residential property owners and businesses, was intended to reduce the federal government’s escalating costs for repairing flood damage after disasters. Under NFIP, the federal government currently assumes the liability for the insurance coverage and sets rates and coverage limitations, among other responsibilities, while private insurers sell the policies and administer the claims for a fee determined by FEMA. NFIP is managed by FEMA’s Federal Insurance and Mitigation Administration, which is responsible for administering programs that provide assistance for mitigating future damages from natural hazards. Some private insurers provide coverage for flood insurance above the limit of NFIP coverage, generally referred to as excess flood insurance. Further, NFIP policies do not provide coverage for business interruption or additional living expenses, which currently are available through some private insurers. Community participation in NFIP is voluntary, but communities must join NFIP and adopt and enforce FEMA-approved building standards and floodplain management strategies in order for their residents to purchase flood insurance through the program. 
Additionally, communities in Special Flood Hazard Areas (SFHA)—areas subject to a 1 percent or greater chance of flooding in any given year—must participate in NFIP for property owners to be eligible for any aid in connection with a flood, including disaster assistance loans and grants for acquisition or construction purposes. Participating communities agree to enforce regulations for land use and new construction in high-risk flood zones and to adopt and enforce state and community floodplain management regulations to reduce future flood damage. Participating communities can receive discounts on flood insurance if they establish floodplain management programs that go beyond NFIP’s minimum requirements. FEMA can suspend communities that do not comply with the program, and communities can withdraw from it by submitting a copy of a legislative action stating their desire to withdraw from NFIP. As of May 2013, about 22,000 communities voluntarily participated in NFIP. NFIP has mapped flood risks across the country, assigning flood zone designations based on risk levels, and these designations are a factor in determining premium rates. To help reduce or eliminate the long-term risk of flood damage to buildings and other structures insured by NFIP, FEMA has used a variety of mitigation efforts, such as elevation, relocation, and demolition. Despite these efforts, the number of repetitive loss properties—generally, those that have had two or more flood insurance claims payments of $1,000 or more over 10 years—has continued to grow. NFIP policies have what FEMA describes as either subsidized or full-risk premiums. The type of policy and the subsequent rate a policyholder pays depend on several property characteristics—for example, whether the structure was built before or after a community’s FIRM was issued and the location of the structure in the floodplain. 
Structures built after a community’s FIRM was published must meet FEMA building standards, and the property owner must pay full-risk rates, which reflect FEMA’s estimates of the actual risk of flooding. Post-FIRM structures are generally less flood prone than pre-FIRM properties because they have been built to flood-resistant building codes or mitigation steps have been taken to reduce flood risks. Subsidized rates do not reflect the estimated total flood risk and instead are highly discounted. Even with highly discounted rates, subsidized premiums are, on average, higher than full-risk premiums because subsidized pre-FIRM structures generally are more prone to flooding (that is, riskier) than other structures. In general, pre-FIRM properties were not constructed according to the program’s building standards or were built without regard to base flood elevation—the level relative to mean sea level at which there is an estimated 1 percent or greater chance of flooding in a given year. For example, the average annual subsidized premium with October 2011 rates for pre-FIRM subsidized properties was about $1,224, while the average annual premium for post-FIRM properties paying full-risk rates was about $492. Flooding disasters of the 1920s and 1930s led to federal involvement in protecting life and property from flooding, with the passage of the Flood Control Act of 1936. Generally, the only available financial recourse to assist flood victims was postdisaster assistance. When flood insurance was first proposed in the 1950s, it became clear that private insurance companies could not profitably provide flood coverage at a price that consumers could afford, primarily because of the catastrophic nature of flooding and the difficulty of determining accurate rates. In 1965 Congress passed the Southeast Hurricane Disaster Relief Act that provided financial relief for victims of flooding. 
In addition, the act mandated a feasibility study of a national flood insurance program, which helped provide the basis for the National Flood Insurance Act of 1968 that created NFIP. From 1969 through 1977, the Department of Housing and Urban Development (HUD), which administered NFIP at the time, had an agreement with a consortium of private insurers known as the National Flood Insurers Association. Under this agreement, HUD reimbursed the association of insurers for operating costs and provided an annual operating allowance equal to 5 percent of policyholders’ premiums. HUD ended the partnership in 1978 and converted NFIP to a government- operated program because it could not come to an agreement with the private insurers on issues such as HUD’s right to approve their operating budgets and its authority over policy decisions and regulations. HUD also estimated $15 million in cost savings by ending the partnership. The insurers wanted to continue the partnership, but the HUD Secretary decided to use her statutory authority to convert NFIP to a government program. In 1978, we determined that the partnership had not reached a last resort status and that the potential for a new agreement existed. The Flood Disaster Protection Act of 1973 made the purchase of flood insurance mandatory for owners of properties in special flood hazard areas that are secured by mortgages from federally regulated lenders and provided additional incentives for communities to join the program. The National Flood Insurance Reform Act of 1994—which amended the 1968 act and the 1973 act—strengthened the mandatory purchase requirements for owners of properties located in SFHAs with mortgages from federally regulated lenders. The Bunning-Bereuter-Blumenauer Flood Insurance Reform Act of 2004 authorized a pilot program to encourage owners of properties that suffer from repeated flood losses to take steps to reduce the risk of damage, known as mitigation. 
Owners of these “severe repetitive loss” properties who refuse an offer to mitigate the risks face higher premiums. Finally, in 2012 the Biggert-Waters Act reauthorized the program through 2017 and removed subsidized rates for a number of insured properties, such as residential properties that are not an individual’s primary residence, severe repetitive loss properties, business properties, and properties that had received payments for flood-related damage that cumulatively equaled or exceeded the property’s fair market value. The Biggert-Waters Act also included several other provisions, such as the following:
- Rates that fully reflect flood risk for specified properties are to be phased in over several years—with increases of 25 percent each year—until the average risk premium rate for these properties equals the average of the full-risk premium rates for all properties in that risk classification.
- Properties will no longer qualify for subsidies if the policyholder has deliberately chosen to let the policy lapse or if a prospective insured refuses to accept any offer of mitigation assistance (including relocation) following a major disaster.
- Properties that did not have NFIP insurance when the act was enacted and properties purchased after that date will not receive subsidies.
- Subsidized properties that are sold will lose their subsidies.
- FEMA must adjust rates to accurately reflect the current risk of flooding to properties when an area’s flood map is changed. FEMA is determining how this provision will affect properties that were grandfathered into lower rates.
In addition, the act allows average premium increases of 20 percent annually by risk class (the previous cap was 10 percent), establishes minimum deductibles, requires FEMA to establish a reserve fund, and requires FEMA to include losses from catastrophic years in determining premiums that are based on the “average historical loss year,” among other things.
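The 25 percent annual phase-in can be illustrated with a short calculation. The sketch below shows only the compounding arithmetic, not FEMA's actual rating methodology, and the dollar amounts are hypothetical.

```python
# Illustrative only: count the 25 percent annual increases needed for a
# subsidized premium to reach the full-risk level. The increase stops once
# the full-risk rate is reached. Dollar figures are hypothetical examples,
# not actual NFIP premiums.

def years_to_full_risk(subsidized: float, full_risk: float, increase: float = 0.25) -> int:
    """Return the number of annual increases before the premium reaches
    or exceeds the full-risk premium (capped at the full-risk level)."""
    years = 0
    premium = subsidized
    while premium < full_risk:
        premium = min(premium * (1 + increase), full_risk)
        years += 1
    return years

# Hypothetical property: $1,200 subsidized premium vs. a $3,000 full-risk rate.
print(years_to_full_risk(1200, 3000))  # prints 5
```

Under these assumed figures, five annual increases (to roughly $1,500, $1,875, $2,344, $2,930, and then the $3,000 cap) would bring the premium to the full-risk rate.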
The potential adverse effects on certain property owners of the premium rate increases arising out of the Biggert-Waters Act and the possibility of delaying some rate increases have been the subject of congressional hearings and recent legislative proposals. We previously identified goals for federal involvement in natural catastrophe insurance programs. These goals can be adapted to help evaluate strategies for increasing private sector involvement in flood insurance. The goals include: charging premium rates that fully reflect estimated risks, encouraging private markets to provide flood insurance, encouraging property owners to buy flood insurance, and limiting costs to taxpayers before and after a flood. The four-goal framework captures the public policy goals for providing insurance against natural catastrophes such as flooding. Stakeholders with whom we spoke generally agreed that these goals were appropriate. Stakeholders with whom we spoke or who participated in our roundtable discussion identified several conditions that would be needed to increase private sector involvement in flood insurance. First, private insurers would have to perceive floods as an insurable risk that they could profitably cover—that is, they would need to be able to estimate both the frequency and severity of future losses with some accuracy. Second, they would need the freedom to charge adequate rates and decide which applicants they would insure. Third, private insurers would need adequate consumer participation in order to manage and diversify their risk. Private insurers generally cover only what they see as insurable risks whose frequency and severity they can estimate with some accuracy. Being able to calculate the average frequency and severity of future losses enables insurers to set premium rates that are likely to be sufficient to pay all claims and expenses and yield a profit. 
For this reason, homeowners policies do not cover a variety of risks, such as flooding, which tend to occur unexpectedly and can cause a devastating amount of damage. NFIP was created in part because the catastrophic nature of flooding made it difficult for private insurance companies to develop an actuarial rate structure that could adequately reflect the risks to flood-prone properties. The program is the only provider of affordable flood insurance for most U.S. homeowners. Stakeholders indicated that private insurers would need more information and more sophisticated modeling to assess flood risk before they could begin providing flood insurance. Risk modelers with whom we spoke questioned the reliability of FEMA’s flood risk zones, which in many areas, such as along coastlines, were determined using a less sophisticated methodology than what is available today. For example, one risk modeler said that FEMA’s base flood elevations were likely too low in many places and that many structures across the country were at higher flood risk than the flood maps indicated. According to one insurer, determining flood risk for commercial policyholders involves reviewing specific information, including FEMA flood maps and satellite imagery, to determine a structure’s flood zone, proximity to other flood zones, and elevation. This insurer uses geographical address information to determine flood risk on a building-by-building basis based on a range of factors that include the structural elements of a building and its contents. One risk modeler suggested that FEMA should design flood risk maps for future building stock and should model both current and future flood levels; however, the risk modeler said that determining future flood risks was a challenge and that the industry lacked consensus on the methodology that should be used. 
Stakeholders anticipated that risk modeling firms would be releasing new flood models in the next several years, providing the tools that private insurers would need to evaluate flood risk. Stakeholders also noted that a private market for flood insurance would likely create a market for modeling flood risk, attracting many companies to fill that need. Stakeholders said that if other conditions for private sector involvement in flood insurance were met, more risk-modeling companies or private insurers would begin developing models that private insurers could use to determine risks more accurately. For example, one risk modeler said risk modelers could determine how different bodies of water, such as two rivers, would interact and contribute to flood events—information that FEMA’s flood maps do not include. Stakeholders said that in addition to advanced computer modeling, access to NFIP policy and claims data would help private insurers assess flood risks and determine which properties they might be willing to insure. However, FEMA officials said the agency would need to address privacy concerns to provide property-level information to insurers, because the Privacy Act prohibits the agency from releasing detailed NFIP policy and claims data. The Privacy Act governs how federal agencies may use the personal information that individuals supply when obtaining government services or fulfilling obligations. FEMA officials said that while the agency could release data in the aggregate, some information could not be provided in detail. For example, FEMA could provide zip-code level information to communities but would need to determine how to release property-level information while protecting the privacy of individuals. Stakeholders said that private insurers would also need to be able to charge adequate rates that would reflect the full estimated risk of flood loss and allow for profit. 
In our prior work, one of the public policy goals we adapted for evaluating options for increasing private sector involvement in flood insurance was charging premium rates that fully reflected estimated risks. Actuarially sound rates determined by private insurers would differ from NFIP rates in that they would be calculated to account for potential losses, reflect the cost of capital to cover potential catastrophic losses, and provide a reasonable return for investors. As a result, these rates would be higher than present NFIP rates for many properties and could present affordability challenges for consumers. One stakeholder said higher prices could lead some homeowners to purchase lower amounts of coverage or choose not to purchase flood insurance at all. Further, another stakeholder said higher flood insurance rates could affect property owners’ home value and ability to sell their property. For example, one stakeholder said that potential buyers might decide not to purchase a home in a high-risk area after determining the cost of flood insurance for the property. Stakeholders said that the political environment could prevent insurers from setting adequate rates. For example, they expressed concerns that efforts to increase flood insurance rates would likely face public resistance or be politically unpopular. Stakeholders said it was challenging for private insurers to gain enough confidence to enter the flood insurance market because they feared not being able to charge actuarially sound rates or obtain a reasonable rate of return. For example, stakeholders said that state-by-state approval of flood insurance rates might impede insurers’ ability to obtain adequate rates. An insurer said that most state insurance regulators lacked knowledge about flood policies, but a state regulation official said that most states took a measured approach to rate regulation, with an eye toward allowing insurers to earn a reasonable profit.
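The rate components stakeholders describe—expected losses, a loading for the cost of capital held against catastrophic years, and margins for expenses and profit—can be sketched in a toy calculation. This illustrates the general actuarial idea only; it is not how NFIP or any insurer actually sets rates, and every figure is hypothetical.

```python
# Toy decomposition of a full-risk premium, assuming the components
# stakeholders describe: expected annual losses plus a capital loading,
# grossed up so that expenses and profit are stated shares of the final
# premium. All inputs are hypothetical.

def full_risk_premium(expected_loss: float, capital_loading: float,
                      expense_ratio: float, profit_margin: float) -> float:
    """Gross premium = (expected loss + capital loading) divided by the
    share of premium left after expenses and profit."""
    net_cost = expected_loss + capital_loading
    return net_cost / (1 - expense_ratio - profit_margin)

# Hypothetical: $900 expected loss, $300 capital loading, 20% expenses, 5% profit.
print(round(full_risk_premium(900, 300, 0.20, 0.05), 2))  # prints 1600.0
```

The gross-up step is why full-risk private rates would exceed a premium that covered expected losses alone, as the surrounding discussion notes.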
Further, one stakeholder said that private insurers would need flexibility to account for potential climate change effects when pricing flood, hurricane, and other risks associated with sea level rise. Although potential climate change effects would have a long time horizon, the stakeholder suggested it was an issue to consider regarding the regulatory environment for private insurers. In addition, stakeholders said that private insurers would need freedom in underwriting policies so that they could accept and reject applicants as necessary to manage their risk portfolios. Insurable risks have certain characteristics that make providing coverage possible. As well as being estimable, for example, loss exposure should not be potentially catastrophic for the insurer. Freedom to manage risk would help insurers manage potentially catastrophic losses and further encourage their participation in the flood insurance market. For example, insurers might determine that they needed to limit the number of policies in a geographic area because a single flood event could result in losses on many of those policies at the same time. Stakeholders said incentives would be needed to encourage insurers to assume greater risk, particularly in flood-prone areas. One stakeholder said the political environment could limit insurers’ ability to accept and reject applicants as necessary to manage their risk exposure. Further, stakeholders said that different insurance regulations across states could further complicate insurers’ ability to underwrite flood insurance. One stakeholder said that insurers would need clarity on the political and regulatory environment before entering the flood market. Insurers need to be able to manage their risk exposure by having a large, diverse risk pool with premiums at a level that property owners are willing and able to pay. 
Having a large and diversified risk pool would enable an insurer to better estimate losses based on loss data it collected over time and to spread the losses over a large number of properties. Economically feasible premiums would provide an opportunity or incentive for property owners to obtain coverage. Further, in our prior work, one of the public policy goals we adapted for evaluating strategies for increasing private sector involvement in flood insurance was encouraging broad consumer participation in the flood insurance market. We previously have found that efforts to encourage broader participation in NFIP could reduce costs, depending on how they were implemented. Likewise, a large risk pool could help private insurers manage their exposure. Broad consumer participation in the market would also be necessary to address adverse selection—the phenomenon that occurs when only those most in need typically purchase insurance, creating a pool of only the highest-risk properties. In this case, insurers must be confident that homeowners other than those in the highest-risk areas will obtain flood insurance, because adverse selection can hamper an insurer’s efforts to manage its risk. A 2006 study estimated that NFIP participation rates were as low as 50 percent in SFHAs, where property owners with loans from federally insured and regulated lenders were required to purchase flood insurance. The study also found that participation rates outside of SFHAs were as low as 1 percent. Another study found that homeowners both within and outside SFHAs who did obtain flood insurance when purchasing their homes typically kept it for 2 to 4 years before canceling their policies. Stakeholders said that homeowners were more likely to purchase flood insurance immediately following a flood event and to drop it later as their perception of their flood risk decreased. 
Homeowners that do not purchase flood insurance, including many residing in SFHAs, make that decision for a number of reasons that would have to be addressed in order to make private flood insurance possible. Stakeholders said that affordability was one of the key challenges to providing flood insurance. In addition, stakeholders said that affordability could be a particularly difficult issue for low- and moderate-income homeowners, as evidenced by complaints surrounding rate increases under the Biggert-Waters Act. Further, stakeholders said that not all homeowners would be able to afford flood insurance at rates that private insurers would consider adequate for reflecting the full estimated risk of flood loss and allowing for profit. Based on our analysis of stakeholder views and other information, many property owners may also have an inaccurate perception of their risk of flooding and thus do not buy flood insurance. For example, a 2012 study suggested that some property owners believe that only properties in SFHAs are in a flood zone and that properties located outside of SFHAs are not at risk of flooding. Stakeholders said that it was difficult to convince homeowners to pay for coverage for an unlikely event, despite the potential for severe damage. The former definition of SFHAs as a “100-year” flood zone has contributed in part to this misperception, given the long time horizon, so that some property owners have assumed that after experiencing a flood loss, their properties would be free from flooding for the next 100 years. And one stakeholder indicated that many consumers mistakenly assumed that their homeowners insurance policies included flood coverage. 
In addition, a banking association with whom we spoke said that some lending institutions did not see flood risk as a threat to their safety and soundness and therefore often did not require flood coverage at mortgage origination, potentially contributing further to homeowners’ misperception of flood risk. Finally, stakeholders suggested that many consumers did not obtain flood insurance because they assumed they would receive federal or state disaster assistance after a flood event. However, federal disaster assistance to individuals is limited. Disaster assistance is administered through several federal programs and is generally made available only after the President issues a disaster declaration, and homeowners seeking such assistance must first seek recovery under any flood insurance policies they hold. Further, while one federal grant program is available—the Individuals and Households Program—federal disaster assistance to individuals and businesses for the repair or replacement of structures consists primarily of federal loan programs. These include loans from the U.S. Small Business Administration that are available to all first-time applicants but to repeat applicants only if they have flood insurance. Homeowners can apply for up to $200,000 in home and property disaster loans for the repair or replacement of a primary residence to its predisaster condition. Business disaster loans of up to $2 million are the primary form of federal assistance for the repair and rebuilding of nonfarm, private sector disaster losses. One stakeholder said many homeowners without flood insurance would not qualify for individual assistance or loans from the U.S. Small Business Administration and that homeowners might not have options—other than filing for bankruptcy—to recover financially from a flood event.
We previously reported that while the federal government has provided significant financial assistance after major disasters, the federal role is primarily to assist state and local governments, which have the central role in recovery efforts. Based on our analysis, all of these issues would need to be addressed to create the conditions that would encourage private insurers to consider providing flood insurance. Addressing them would be a complex task and would require difficult trade-offs. For example, raising rates beyond what FEMA currently charges could create significant hardship and put at risk the homes of those who could not afford flood insurance. Further, it could be a challenge to encourage property owners to purchase flood insurance, particularly when many of them do not believe they are exposed to the risk of flooding. Stakeholders with whom we spoke or who participated in our roundtable discussion identified several strategies that could be used to help transfer some of the responsibility of providing flood insurance from the federal government to the private sector. These strategies, or certain aspects of them, could be used jointly to promote the conditions that stakeholders said would be necessary for private sector involvement—the ability to assess risk, the freedom to charge adequate rates and manage risk, and adequate consumer participation. These strategies serve only as broad potential frameworks, and because of the complexity of providing flood insurance, the success of any reform effort would also depend on how it is structured and implemented. Further, stakeholders said that any strategy will likely require certain roles for federal, state, and local government entities. One strategy stakeholders identified, which we have also mentioned in previous reports, would be for Congress to eliminate subsidized rates, charge full-risk rates to all policyholders, and appropriate funding for a direct means-based subsidy to some policyholders. 
A second strategy that stakeholders identified would be for the federal government to provide only residual insurance, serving as the insurer of last resort for properties that the private sector is unwilling to insure. Alternatively, a third strategy would be for the federal government to serve as a reinsurer and charge private insurers a premium for the federal government to assume risk for losses that exceed a predetermined amount. In addition to these strategies, stakeholders proposed others, including mandatory coverage, reinsurance for NFIP, and catastrophe bonds. Stakeholders proposed eliminating all subsidized rates and charging all policyholders rates that reflected the full estimated risk of flooding, with Congress providing a direct means-based subsidy to some policyholders. Stakeholders generally agreed that any subsidies should be explicit and provided directly to the policyholder instead of hidden in a discounted premium rate, partly because such hidden subsidies conceal a property’s actual flood risk and encourage development in high-risk areas. While the premium levels may be sufficient to cover claims in years with lower losses, the subsidies result in insufficient premium revenue over the long term to cover years with higher losses. As a result, the cost of subsidies is disguised from taxpayers and evident only in FEMA’s need to borrow from Treasury. Making the subsidies explicit would require Congress to appropriate funds for them, increasing transparency by showing the exact annual cost of the subsidies. Such subsidies could require determining eligibility requirements—for example, a means test—as well as the amount of subsidies. We suggested that Congress consider this approach in a previous report, and this strategy continues to be an option that could offer benefits to the program and could be implemented independent of any increase in private sector involvement in flood insurance.
Stakeholders said that removing the hidden subsidies and charging full-risk rates to all policyholders would have a number of advantages. For example, demonstrating the political will to charge full-risk rates within NFIP could signal to private insurers a greater likelihood of being allowed the freedom to charge adequate rates in a private flood insurance market, thus encouraging their potential participation. However, stakeholders expressed concerns about proposals to delay rate increases, specifically those authorized by the Biggert-Waters Act. These stakeholders said that such delays would increase private insurers’ skepticism about the feasibility of participating in a private flood insurance market. Although raising rates could create affordability concerns for some, delaying the increases could reduce the chances of increasing private sector involvement in flood insurance, leaving taxpayers to continue paying for flood claims through future borrowing from Treasury. Based on our analysis, providing means-tested subsidies to some property owners would allow Congress to address affordability concerns associated with premium rate increases. Currently, subsidies are available regardless of a property owner’s ability to afford a full-risk premium. Means-testing the subsidies would ensure that only those who could not afford full-risk rates would receive assistance and should increase the amount in premiums NFIP collects to cover losses. Collecting more in premiums by providing subsidies to fewer policyholders would reduce taxpayer costs, one of the goals for reforming catastrophe insurance. Finally, charging full-risk rates to all policyholders would demonstrate a property’s actual flood risk to property owners, discourage further development in high-risk areas, and encourage property owners to invest in mitigation to lower their exposure to flood risk as well as their premium rate.
However, while means-tested subsidies could make premium rates more affordable, they could also decrease a property owner’s incentive to mitigate. Stakeholders raised other concerns about increases in flood insurance premium rates. One stakeholder noted that rate increases could lower a home’s market value because the cost of owning the home would rise. Further, stakeholders said that some communities with a high risk of flooding could become economically unviable if premium rate increases made flood insurance unaffordable for too many residents. For example, stakeholders as well as participants in congressional hearings have said that rate increases in some high-risk areas could make it unaffordable for many homeowners within a community to stay in their homes, which could lead to declining property values for homes and businesses. Premium rate increases could also cause some property owners to cancel their coverage or opt not to purchase it, particularly those who were not already mandated to purchase flood insurance. Reduced participation could negate some of the benefits of providing the targeted subsidies. Means-based subsidies could soften some of these potential effects, but any solution will need to consider whether other steps would be necessary to limit adverse effects on particular communities. Finally, the Biggert-Waters Act eliminates the transfer of subsidies when homes are sold and the renewal of subsidized policies if a policyholder deliberately chooses to allow flood coverage to lapse. We previously have reported that the continuing implementation of the act is expected to decrease the number of subsidized policies. As the number of subsidized policies falls, NFIP’s premium shortfall will decrease, helping its financial condition. Means-based subsidies could provide greater up-front savings by limiting the number of policyholders that would be eligible for the subsidies.
However, there could be a point when the cost of continued means-based subsidies could exceed the level of subsidies that will otherwise exist as homes are sold or coverage lapses. Some stakeholders suggested that targeted subsidies should be temporary to help address this concern. According to stakeholders, another strategy for increasing the private sector’s role in flood insurance is to have the federal government serve as the insurer of last resort. This strategy would give private insurers the opportunity to provide flood insurance to most property owners who desire it, and the federal government would offer coverage only to the highest-risk properties that private insurers were unwilling to underwrite. Stakeholders said that particularly in the early years, private insurers might be conservative regarding the properties they were willing to insure, but many might be willing to insure more properties as they grew comfortable with underwriting flood coverage. Some states have similarly structured residual insurance programs for other perils that could provide insight and offer lessons in structuring such a program for flood insurance. For example, representatives from state residual insurance programs with whom we spoke said their programs charged rates that are higher than what would be available in the private market to ensure they remain insurers of last resort and discourage property owners from using them as an alternative to private sector coverage. This strategy could help create the conditions we identified earlier in this report for increasing private sector involvement. To address private sector concerns about being able to accurately set rates, FEMA could grant private insurers access to historical claims data. However, as noted earlier, FEMA would need to ensure that any disclosure complies with the Privacy Act.
Further, the premium rates for residual coverage should be above what private market rates would be for comparable coverage to discourage property owners from using it as an alternative to private coverage. Because this strategy would transfer many of NFIP’s policies to the private sector, NFIP’s total exposure would likely decrease. To the extent that the residual policies sold by NFIP were priced at full-risk rates, long-term taxpayer costs could be reduced. However, while full-risk rates would reduce taxpayer costs in the long term, the average cost and volatility of the program’s remaining policies would increase, because NFIP would be left with the highest-risk policies. Further, because these residual policies would have the highest risk, they would require high premium rates to cover their full risk of loss, potentially reducing consumer participation. Combining this option with means-based subsidies could help address affordability concerns and maintain consumer participation. According to stakeholders, a third strategy for increasing private sector involvement in flood insurance would be for the federal government to provide reinsurance to private insurers. Specifically, the federal government could provide a backstop for private insurers by agreeing to pay the difference when total claims exceeded a certain amount within a specified period. To fund reinsurance claims for catastrophic losses, the federal government could collect premiums from private insurers. Some stakeholders said that the private reinsurance industry might be able to provide some reinsurance to private insurers, reducing the federal government’s potential role in reinsurance. However, other stakeholders expressed concerns that private reinsurers might raise premium rates or cancel coverage after a catastrophic loss year, creating uncertainty for private insurers that could affect their willingness to enter the flood insurance market.
Stakeholders said that the risk of insolvency in the event of catastrophic flood losses might make some private insurers hesitant to enter the flood insurance market and that providing reinsurance could help address these concerns and encourage their participation. If the federal government collected adequate reinsurance premiums, this strategy could also reduce costs to taxpayers, one of the goals for reforming natural catastrophe insurance, because most of the flood risk would be transferred to the private sector. However, the costs of these reinsurance premiums would likely be passed on to the policyholder, and the resulting higher rates could reduce consumer participation. Once again, means-based subsidies could help soften these rate increases and maintain consumer participation. Stakeholders identified several other strategies that could help encourage private sector involvement in flood insurance. Mandatory coverage. In particular, some stakeholders said that a federal mandate could help achieve the level of consumer participation necessary to make the private sector comfortable with providing flood insurance coverage. For example, some stakeholders said that the federal government could mandate that homeowners insurance policies include flood coverage or that all homeowners purchase flood insurance. Either mandate could increase the number of homeowners purchasing flood insurance, something that could help private insurers diversify and manage the risk of their flood insurance portfolio and address concerns about adverse selection. However, the stakeholders were also concerned that private companies might oppose a mandate that homeowners policies include flood coverage, potentially raising legal issues. Further, stakeholders said that some property owners—particularly those who perceived their flood risk to be low—might also resist being required to purchase flood insurance.
Finally, as we have reported, the federal government has faced challenges with enforcing the current mandatory purchase requirement for flood insurance, which applies to properties in high-risk areas with federally regulated mortgages. If the federal government faces similar challenges with additional mandates, it might not realize the intended benefits of increased consumer participation. Reinsure NFIP. Other stakeholders discussed NFIP purchasing reinsurance from the private sector to cover exposure to catastrophic losses rather than relying on borrowing from Treasury. While doing so would increase private sector involvement in flood insurance, the federal government would still play an active role in flood insurance, because NFIP would still serve as the main provider of primary flood insurance coverage. However, the federal government’s exposure to catastrophic losses could be reduced. On the other hand, stakeholders noted that private companies might cancel reinsurance after a catastrophic loss year, leading to uncertainty as to the degree to which the federal government’s exposure would be mitigated. Further, stakeholders said that NFIP may need to collect additional premiums on its existing policies to pay for the reinsurance. However, given that NFIP’s current premium levels are insufficient, NFIP may be unable to pay for this additional expense. Catastrophe bonds. Stakeholders said that as an alternative to reinsurance, NFIP might be able to limit its exposure to large losses by transferring risk to capital markets through insurance-linked securities such as catastrophe bonds. For example, NFIP could issue interest-bearing bonds to investors willing to bear the risk of losing some of their investment if flood claims exceeded a predetermined amount. Like reinsurance, catastrophe bonds could help NFIP manage its risk exposure, but again it would need to be able to collect adequate premiums to cover any necessary payments of principal or interest.
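The catastrophe-bond mechanic described above—investors lose some of their investment if flood claims exceed a predetermined amount—can also be sketched briefly. This assumes simple indemnity-trigger terms for illustration only; the trigger, principal, and claims figures are hypothetical.

```python
# Illustrative catastrophe-bond write-down under assumed indemnity-trigger
# terms: if covered flood claims for the period exceed the trigger, the
# investors' principal is reduced by the excess, up to the full principal,
# and that amount is available to pay the sponsor's losses.
# Figures are hypothetical.

def cat_bond_principal_loss(total_claims: float, trigger: float,
                            principal: float) -> float:
    """Return the principal written down: claims in excess of the
    trigger, capped at the bond's principal."""
    return min(max(total_claims - trigger, 0.0), principal)

# A hypothetical $1.5 billion claims year against a $1 billion trigger
# on a $400 million bond:
print(cat_bond_principal_loss(1.5e9, 1.0e9, 4.0e8))  # prints 400000000.0
```

In this assumed example the full $400 million of principal would be written down, which is why investors demand interest payments high enough to compensate for that risk—the additional premium cost the surrounding text notes NFIP would need to cover.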
We have previously reported that state insurance entities, as well as private sector insurers, have used catastrophe bonds to manage their risk. Stakeholders with whom we spoke or who participated in our roundtable discussion said that no matter which strategies Congress might choose for increasing private sector involvement in flood insurance, the federal government would likely still have some role. For example, some strategies include transforming the federal government's role from primary provider of flood insurance to a residual insurer or a reinsurer. As the private sector increases its role in providing flood coverage, various government entities could collaborate on other important roles. In particular, stakeholders said that it would continue to be important for federal, state, and local governments to encourage mitigation through direct funding or other less costly strategies and to promote risk awareness among consumers. Stakeholders said that government entities want to reduce property owners' current and future exposure to flood risk and increase the resiliency of their structures. Stakeholders also said that with private flood insurance, both the insurance company and the policyholder would have some incentive to mitigate: the insurer to reduce its risk exposure and the policyholder to lower the premium rate. More generally, stakeholders also said that various government entities should continue developing and enforcing building codes and land use agreements in order to reduce the flood risk of current structures and prevent future development in high-risk areas. Stakeholders also said that the federal government should ensure consistent insurance standards and regulations across states, in part through streamlined state regulations or federal oversight, to allow insurers to charge adequate rates.
While a number of conditions are important to attract private sector participation in the flood insurance market, key among them is the ability to charge rates that fully reflect the estimated risk of flooding. The Biggert-Waters Act includes a number of provisions that begin moving NFIP toward full-risk rates for some properties, a critical first step. Delaying or repealing rate increases in the Biggert-Waters Act may address affordability concerns but would likely continue to increase NFIP’s long-term burden on taxpayers. Further, it may reinforce private insurers’ skepticism that they would ever be permitted to charge adequate rates and make their participation unlikely in the foreseeable future. As debates over the private sector’s role continue, one step to address the burden on low- and moderate-income policyholders could be taken immediately. As we have suggested previously, Congress could eliminate subsidized rates, charge full-risk rates to all policyholders, and appropriate funds for a direct means-based subsidy to eligible policyholders. The movement to full-risk rates would encourage private sector participation, and the explicit subsidy would address affordability concerns, raise awareness of the risks associated with living in harm’s way, and decrease costs to taxpayers, depending on the extent and amount of the subsidy. We provided a draft of this report to the Federal Emergency Management Agency (FEMA) and the Federal Insurance Office for their review and comment. FEMA provided technical comments, which we have incorporated into the report. We are sending copies of this report to the appropriate congressional committees, FEMA, and the Federal Insurance Office. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. The Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act) mandated that GAO conduct a number of studies, including this study on assessing a broad range of options, methods, and strategies for privatizing the National Flood Insurance Program (NFIP). This report discusses (1) conditions needed for private sector involvement in flood insurance and (2) strategies for increasing it. To both identify conditions needed for private sector involvement in flood insurance and evaluate the benefits and challenges of strategies for increasing private sector involvement, we reviewed the laws, regulations, and history of NFIP, Federal Emergency Management Agency (FEMA) reports, academic studies, our prior work on NFIP, and other documentation and reports. We also held a roundtable in August 2013 composed of a variety of stakeholders to obtain their views on these issues. The 14 stakeholders participating in the roundtable included FEMA officials; state insurance regulators; a catastrophe modeling firm; an academic; and individuals representing associations of private insurers, reinsurers, actuaries, consumers, and floodplain managers. We supplemented information obtained through the roundtable with interviews with stakeholders representing the groups listed above as well as Federal Insurance Office officials; state residual insurance programs; and groups representing insurance adjusters, insurance agents, realtors, and mortgage bankers. We selected a diverse group of stakeholders for the roundtable and interviews based on type of organization, role, and membership. In addition, we identified some stakeholders based on work conducted for a prior report and on suggestions from other stakeholders.
The roundtable discussion focused on four broad themes: a policy goals framework for evaluating options for increasing private sector involvement in flood insurance; barriers to private sector involvement that would need to be addressed; options for private sector involvement and associated benefits and challenges; and the governmental role that would remain in a private flood insurance market. We identified four public policy goals to evaluate options for changing the federal role in natural catastrophe insurance in a prior report. These goals are generally consistent with the evaluation criteria the stakeholders discussed. Roundtable participants and other stakeholders generally agreed that these goals were applicable for evaluating options for increasing private sector involvement in flood insurance. We conducted this performance audit from March 2013 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Patrick Ward (Assistant Director); Emily Chalmers; Heather Chartier; William Chatlos; Christopher Forys; Patricia Moye; and Carrie Watkins made key contributions to this report.

NFIP has accrued $24 billion in debt, highlighting structural weaknesses in the program and increasing concerns about its burden on taxpayers. As a result, some have suggested shifting exposure to the private sector and eliminating subsidized premium rates, so individual property owners--not taxpayers--would pay for their risk of flood loss.
NFIP was created, in part, because private insurers were unwilling to insure against flood damage, but new technologies and a better understanding of flood risks may have increased their willingness to offer flood coverage. The Biggert-Waters Flood Insurance Reform Act of 2012 moves NFIP toward charging full-risk rates. It also mandates that GAO conduct a study on increasing private sector involvement in flood insurance. This report addresses (1) the conditions needed for private sector involvement in flood insurance and (2) strategies for increasing private sector involvement. To do this work, GAO reviewed available documentation and hosted a roundtable in August 2013 that included stakeholders from FEMA, the insurance and reinsurance industries, and state insurance regulators, among others. GAO also interviewed other similar stakeholders. According to stakeholders with whom GAO spoke, several conditions must be present to increase private sector involvement in the sale of flood insurance. First, insurers need to be able to accurately assess risk to determine premium rates. For example, stakeholders told GAO that access to National Flood Insurance Program (NFIP) policy and claims data and upcoming improvements in private sector computer modeling could enable them to better assess risk. Second, insurers need to be able to charge premium rates that reflect the full estimated risk of potential flood losses while still allowing the companies to make a profit, as well as be able to decide which applicants they will insure. However, stakeholders said that such rates might seem unaffordable to many homeowners. Third, insurers need sufficient consumer participation to properly manage and diversify their risk, but stakeholders said that many property owners do not buy flood insurance because they may have an inaccurate perception of their risk of flooding.
Stakeholders identified several strategies that could help create conditions that would promote the sale of flood insurance by the private sector. For example:

NFIP charging full-risk rates. Congress could eliminate subsidized rates, charge all policyholders full-risk rates, and appropriate funding for a direct means-based subsidy to some policyholders. Stakeholders said full-risk NFIP rates would encourage private sector participation because they would be much closer to the rates private insurers would need to charge. The explicit subsidy would address affordability concerns, increase transparency, and reduce taxpayer costs depending on the extent and amount of the subsidy. The Biggert-Waters Act eliminates some subsidized rates, but some have proposed delaying these rate increases. Doing so could address affordability concerns, but would also delay addressing NFIP's burden on taxpayers.

NFIP providing residual insurance. The federal government could also encourage private sector involvement by providing coverage for the highest-risk properties that the private sector is unwilling to insure. Providing residual coverage could increase the program's exposure relative to the number of properties it insured, but NFIP would be insuring fewer properties, and charging adequate rates could reduce taxpayer costs.

NFIP as reinsurer. Alternatively, the federal government could serve as a reinsurer, charging a premium for assuming the risk of catastrophic losses. However, the cost of reinsurance premiums would likely be passed on to consumers, with higher rates potentially decreasing consumer participation.

Stakeholders identified other strategies, including mandatory coverage requirements to ensure broad participation, NFIP purchasing reinsurance from the private sector rather than borrowing from the U.S. Treasury, and NFIP issuing catastrophe bonds to transfer risk to private investors.
As the private sector increases its role in providing flood coverage, the federal government could collaborate with state and local governments to focus on other important roles, including promoting risk awareness among consumers, encouraging mitigation, enforcing building codes, overseeing land use agreements, and streamlining insurance regulations. While GAO makes no new recommendations in this report, GAO reiterates its previous suggestion from a June 2011 report (GAO-11-297) that Congress consider eliminating subsidized rates, charging full-risk rates to all policyholders, and appropriating funds for premium assistance to eligible policyholders to address affordability issues.
The federal government has been grappling with equal employment opportunity issues for over 3 decades. A number of laws and executive orders have been promulgated to end discrimination and promote affirmative employment within the federal government. The initial focus of the legislation was on ensuring fair employment practices and nondiscrimination. The attention to affirmative action as a means of addressing the historical underrepresentation of women and minorities in the federal workforce began in the 1960s. Two major pieces of legislation, the Equal Employment Opportunity Act of 1972 and the Civil Service Reform Act of 1978, provide the statutory basis for the establishment of affirmative employment and recruitment programs in the federal government. The Equal Employment Opportunity Commission (EEOC) has primary responsibility for providing federal agencies with guidance on their affirmative action programs and for monitoring and evaluating program implementation. Although affirmative action programs are currently the subject of some review and debate, these laws and programs remain in effect and guided the actions of the agencies we reviewed. This report is one in a series that we have prepared on the federal affirmative employment program for the former Chairman of the Senate Committee on Governmental Affairs. In May and October 1991, we reported on the federal affirmative employment planning guidance and the representation of women and minorities in the federal workforce. In March 1993 and July 1994, respectively, we reported on the progress of EEO groups in the key job workforces of large and small federal agencies. 
As agreed with the Committee, the objectives of this study were to determine the representation of women and minorities at the Departments of the Interior, Agriculture, Navy, and State and changes in the representation levels of these groups, particularly at the upper grade levels and in occupations that lead to those grades; evaluate whether the agencies' affirmative employment program plans complied with EEOC's instructions, particularly those that address factors affecting women and minority underrepresentation; and assess the adequacy of EEOC's and OPM's oversight of the affirmative employment and recruitment programs. As table 1 shows, the four agencies we were asked to review differed in size and showed different changes in the numbers of permanent employees in their workforces between 1984, 1992, and 1994. The four agencies also differed in terms of the percentages of their total workforces that were in white-collar and in key white-collar jobs. EEOC defines key jobs as nonclerical occupations with 100 employees or more that are or can lead to middle and senior-level positions. In fiscal year 1992, 88.8 percent of Interior's permanent employees were in white-collar jobs and 33.7 percent were in key white-collar jobs; 98 percent of Agriculture's permanent employees were in white-collar jobs and 52.8 percent were in key white-collar jobs; 68.7 percent of Navy's permanent employees were in white-collar jobs and 14.8 percent were in key white-collar jobs; and 42.4 percent of State's permanent employees were in white-collar jobs. (State's Foreign Service workforce accounted for 56.7 percent of the agency's total permanent employees.) The analyses presented in chapter 2 address the total, white-collar, and key job workforces at each agency except at the State Department. At State, we examined the Foreign Service workforce in addition to the total and white-collar civil service workforces.
To determine the representation status of women and minorities, we compared each agency's workforce profile with the CLF profile to determine whether the agencies' workforces were representative of the race, ethnic, and gender groups in the CLF. MD-714 and its predecessors instruct that agencies make this comparison for affirmative employment planning purposes. There are different approaches to determining the appropriate CLF for use in this analysis. The directives encourage agencies to use broad occupational categories—professional, administrative, technical, clerical, other, and blue collar (PATCOB). However, EEOC's instructions for the last affirmative planning cycle provided as an alternative the use of occupation-specific data. Each approach, as we discussed in a previous report, has advantages and disadvantages. For example, PATCOB categories can be too general if an occupation being compared requires particular qualifications and educational levels. A disadvantage of the occupation-specific data is that it may be difficult to find occupations in the CLF that precisely match the agencies' occupations. For this report, we made two different comparisons against the CLF. First, we analyzed the agencies' EEO profiles on an overall basis (i.e., all occupations combined) against the national CLF profile. This provided a broad overview of the standing of the different EEO groups in the agencies' total and white-collar workforces. However, this comparison does not take into account the differences between the agencies' occupational mixes and the occupational mix in the CLF. Second, we compared key white-collar occupations that agencies had identified in their affirmative employment plans against specific occupations in the national CLF. Our analyses covered 10 different EEO groups—white men and women, black men and women, Hispanic men and women, Asian men and women, and Native American men and women.
To assess the degree of representation, we computed representation indexes for overall employment and for key jobs. These indexes were computed by dividing the percentage of each EEO group in each of the four agencies by the corresponding percentage of each EEO group in the CLF and multiplying the result by 100. The indexes can range from 0 to more than 100, with 100 indicating full representation and numbers less than 100 indicating underrepresentation. To the extent an index is much smaller than 100, the underrepresentation is correspondingly more severe. The federal workforce data we used came from OPM's Central Personnel Data File (CPDF). Our analyses included full-time and part-time permanent employees. CPDF data come from the federal departments and agencies that report to it. We did not verify the accuracy of the CPDF data. Following EEOC's guidelines, we used the 1990 decennial census CLF data compiled by the U.S. Census Bureau as the benchmark for calculating 1992 representation levels. The use of decennial census data for CLF comparisons is a common approach to measuring the representation of EEO groups in the federal government. However, we recognize that census data, like all other existing benchmarks, have strengths and weaknesses. Census-based CLF data are readily available by EEO group to do analyses of total employment and key jobs (e.g., civil engineers, computer specialists). However, the data become outdated with time and may require adjustments to compensate for undercounting of minorities. We did not make adjustments to the census data. EEOC, working with OPM, created a "crosswalk" that matches federal occupations with similar occupations in the decennial census CLF. The crosswalk does not always provide a perfect match between the federal and census occupations, but it is the closest readily available source for making comparisons. We used the crosswalked census occupations for our analysis of agency key jobs.
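The representation index calculation described above reduces to a one-line formula. A minimal sketch follows; the group percentages used in the example are invented for illustration and are not actual CPDF or census figures.

```python
def representation_index(agency_pct, clf_pct):
    """Percentage of an EEO group in an agency workforce divided by the
    group's percentage in the civilian labor force, multiplied by 100.
    An index of 100 indicates full representation; values below 100
    indicate underrepresentation, more severe the farther below 100.
    """
    return round(agency_pct / clf_pct * 100, 1)

# Hypothetical example: a group making up 3.2 percent of an agency's
# workforce but 6.4 percent of the CLF is represented at half its
# CLF level (index of 50); equal percentages yield full representation.
print(representation_index(3.2, 6.4))   # 50.0
print(representation_index(6.4, 6.4))   # 100.0
```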
We analyzed changes in representation levels of different EEO groups at two points in time—the end of fiscal years 1984 and 1992. Fiscal year 1984 was the most distant year for which we had complete data, and 1992 was the most recent year for which data were available. To analyze changes in representation over this period, we used a ratio-based approach. The ratio-based technique involves comparing ratios of numbers in differing EEO groups. To determine the change in representation levels between 1984 and 1992 for particular EEO groups, we divided the number of employees in the EEO group by the number of white men in each year and then took ratios of those numbers across the years. The term "relative number" used in this report refers to the number of women and minorities for every 100 white men. White men were selected as the benchmark because they dominated the agencies' workforces in 1984 and 1992, especially at General Schedule (GS) grades 14, 15, and senior management levels. It seemed reasonable to consider how the numbers of women and minorities had changed over time relative to them. The ratio-based technique is especially useful in comparing relative changes in workforce representation across EEO groups of very different sizes and when the size and growth rates of the total employee population vary during the period studied. We also used this technique to analyze data on hires, separations, and promotions. These personnel events have an effect on the composition of the workforce and the distribution of EEO groups across grade levels, and analyses of these events may provide information to further explain representation trends. For our grade level analysis, we grouped the GS grades as follows: GS grades 1 through 10, 11 through 12, and 13 through 15. We converted the State Department Foreign Service grades to GS-equivalent grades using OPM's guidelines.
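The ratio-based technique described above amounts to two simple calculations: a relative number for each year, and the ratio of those relative numbers across years. The counts in the sketch below are invented for illustration and are not figures from the agencies reviewed.

```python
def relative_number(group_count, white_men_count):
    """Number of employees in an EEO group for every 100 white men."""
    return group_count * 100 / white_men_count

def change_ratio(rel_start, rel_end):
    """Ratio of the ending to the beginning relative number: 1.0 means
    no change, greater than 1.0 an increase, less than 1.0 a decrease."""
    return rel_end / rel_start

# Hypothetical agency with 10,000 white men in both years.
rel_1984 = relative_number(2_600, 10_000)   # 26 white women per 100 white men
rel_1992 = relative_number(3_900, 10_000)   # 39 per 100 white men
print(change_ratio(rel_1984, rel_1992))     # 1.5: a 50 percent relative gain
```

Because both years are scaled against the same benchmark group, the technique stays comparable even when total agency employment grows or shrinks between the two years, which is why the report uses it across agencies of very different sizes.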
Our definition of senior management included employees in the SES, State Department’s Senior Foreign Service, and State Department’s Chief of Mission positions. To address our second objective—agency compliance with EEOC’s instructions—we reviewed relevant statutes, regulations, and EEOC directives. We examined MD-714 and supplemental memorandums issued by EEOC which contain the affirmative employment planning instructions applicable to the period covered in our review. We discussed the affirmative employment planning instructions with former and current officials from EEOC’s Office of Federal Operations in Washington, D.C., and EEOC’s Atlanta and Philadelphia District Offices who are responsible for reviewing and approving agencies’ affirmative employment plans. (The Atlanta and Philadelphia offices had oversight responsibilities over components of the Navy that we reviewed.) These officials described the factors they considered in reviewing and approving plans and provided us with compliance information for the four agencies we examined. We obtained examples of approval letters and other relevant documentation on the approval process. In addition, we independently reviewed the multiyear affirmative employment plans that the Departments of the Interior, Agriculture, Navy, and State prepared for fiscal years 1988 through 1992 and matched their contents against EEOC’s instructions. To determine whether agencies had analyzed each of the eight program elements as required by MD-714, we talked to agency officials about the affirmative employment planning process. We discussed the agencies’ multiyear plans with EEO and personnel specialists who described the analysis process and how the documents were prepared. We also asked for and reviewed documentation on the program analyses, comparing the analysis done to the guidance in MD-714. 
In addition, we interviewed agency supervisors, managers, SES members, and unit heads to document their roles and extent of involvement in affirmative employment planning and confirm whether certain required tasks were completed in the analyses. To assess the adequacy of EEOC’s and OPM’s oversight efforts, we reviewed 13 on-site evaluation reports of federal agencies or components that EEOC prepared between 1988 and 1992 to determine program coverage at each site. (EEOC reports included evaluations of the Departments of the Interior and the Navy.) We met with officials in EEOC’s Office of Federal Operations to discuss the methodology they used for on-site reviews and their monitoring of agency program implementation. We reviewed EEOC’s standard operating procedures for conducting on-site reviews and staff and budget information on the resources that EEOC has allocated to affirmative action planning since fiscal year 1988. In addition, we reviewed EEOC’s fiscal year 1990 Annual Report on the Employment of Women, Minorities, and People with Disabilities in the Federal Government. Likewise, we met with officials from OPM’s former Office of Affirmative Recruitment and Employment (now the Office of Diversity) and OPM’s Office of Agency Compliance and Evaluation (ACE) (recently merged into the Office of Merit Systems Oversight and Effectiveness) to discuss (1) OPM’s responsibilities in monitoring agencies’ affirmative recruitment programs; (2) the approach OPM uses to carry out its responsibilities, including the criteria used to evaluate agency FEORP plans; and (3) past and current activities to monitor and evaluate agency affirmative recruitment programs. In addition, we reviewed OPM FEORP reports to Congress for fiscal years 1990 to 1993. Our audit work was done from February 1992 to March 1995 in accordance with generally accepted government auditing standards. 
We requested comments on a draft of this report from the heads of Agriculture, EEOC, Interior, OPM, Navy, and State. The Chairman, EEOC; the Director, OPM; the Secretary of the Department of the Interior; and the Deputy Assistant Secretary for Defense (Equal Opportunity) provided written comments that are discussed in chapters 3 and 4 and reprinted in appendixes III through VI. State’s Director of EEO and a program specialist from Agriculture’s Office of Personnel provided oral comments. Federal agencies have been required, as a result of the Civil Rights Act of 1964, as amended by the Equal Employment Opportunity Act of 1972, to develop and implement affirmative employment programs to eliminate the historical underrepresentation of women and minorities in the workforce. To determine where underrepresentation exists, MD-714 (and its predecessor) provide that federal agencies compare the percentage of a particular minority/gender group in an occupation or job category with the percentage of that same group in the CLF. MD-714 (and its predecessor) further provide that when the federal employment percentage is less than the CLF percentage, underrepresentation exists and should be addressed in the agency’s affirmative employment plan. Our analysis of agency compliance with requirements for affirmative employment planning is discussed in chapter 3. We used two approaches to analyze agency workforce data to determine the representation of women and minorities in the workforce. The first approach involved the use of a ratio-based technique to estimate the relative numbers of women and minorities in the agencies and also the numbers involved in certain personnel events in each year. 
The technique, which involves comparing ratios of numbers in differing occupational categories, grade levels, or EEO groups, enabled us to perform analyses that are useful for depicting the direction and magnitude of changes over time and that are especially well suited to comparing the relative changes in workforce representation across groups of very different sizes. The second approach required comparisons to CLF data, a benchmark external to the agencies. To determine representation levels, we computed representation indexes using agency workforce data and national CLF data from the 1990 census. The indexes indicate the extent to which an EEO group is represented in a workforce as compared to that group's representation in the CLF. The index can range from 0 to more than 100, with 100 indicating full representation and lower numbers indicating underrepresentation. Generally, we found that the Departments of the Interior, Agriculture, Navy, and State made progress toward improving the EEO composition of their workforces. The relative numbers of white women and minorities in the agencies' workforces increased between 1984 and 1992. Moreover, the relative number of women and minorities in the agencies' key white-collar jobs increased across all grade levels between 1984 and 1992. Also, the agencies hired and promoted women and minorities into key white-collar jobs in relative numbers that generally equaled or exceeded their relative numbers employed over the period reviewed. However, white and minority women in all agencies and minority men at Interior in 1992 separated at higher rates than white men. Underrepresentation of women and minorities—especially in key jobs—remained in these agencies. White women in all the agencies and minority women at Agriculture were underrepresented on an overall basis in the total and white-collar workforces in fiscal year 1992 when compared to the national CLF.
Most EEO groups continued to be underrepresented in key white-collar jobs in relation to their representation in similar occupations in the CLF. Appendix I provides more data on the results of our analyses. The following sections focus on the relative changes in women and minority representation overall and in the agencies’ key jobs by grade level. In this section, we analyze overall changes in the numbers of women and minorities relative to the numbers of white men. This approach involves comparing ratios of employment numbers for differing EEO groups between 1984 and 1992. Figures 2.1 through 2.3 show that, in virtually all workforces at each agency, the numbers of white women and minorities employed increased relative to the number of white men. The increases were generally larger for white and minority women than for minority men. The relative numbers in these figures indicate, in each year, the number of white women, minority men, and minority women there were for every 100 white men. These relative numbers were calculated by dividing the number of employees in each protected EEO group by the number of white men, and multiplying by 100. Notwithstanding the increases in relative numbers, in both fiscal year 1984 and fiscal year 1992 white women and minorities were represented in lower relative numbers in the agencies’ key white-collar occupations and in the Department of State’s Foreign Service workforce than in the agencies’ total workforces. As seen in figures 2.1 through 2.3, this condition appears somewhat more pronounced for white and minority women than for minority men. We divided the relative number for the latest fiscal year (1992) by the relative number for the beginning fiscal year (1984) to express the amount of change that had occurred. A resulting ratio of 1.0 indicates no change in percentage or relative number; ratios greater than 1.0 indicate an increase in percentages or relative numbers, while numbers less than 1.0 indicate a decrease. 
Tables 2.1, 2.2, and 2.3 display these results. How much progress have the agencies made in improving the standing of women and minorities in their key job grade structure between fiscal years 1984 and 1992? The relative number of women and minorities in key white-collar jobs at Interior, Agriculture, and Navy, and in the State Department Foreign Service, increased across all GS grades (i.e., GS grades 1 through 10, 11 through 12, and 13 through 15) over the period we reviewed. Women and minorities also made strides in the agencies' SES ranks and in State's Foreign Service top positions—Senior Foreign Service Officers and Chiefs of Mission—between fiscal years 1984 and 1992. However, as figures 2.4 through 2.7 and tables 2.4 through 2.7 show, as of fiscal year 1992, women and minorities were still less well represented in the agencies' middle and senior management levels (grades 13 and above) than in the lower levels of the agencies' hierarchies. The relative numbers of white women and minorities at Interior, Agriculture, and Navy increased at every grade level. Increases in relative numbers were, at grade 15 and below in these three agencies, generally larger for white and minority women than for minority men. The only exception was for grades 1 through 10 at Navy, where the increase in the relative number of minority men was greater than that for white women. Among State's Foreign Service employees, only white women increased in representation at all three grade levels. The percentage of minority men increased at grades 13 through 15 but decreased at grades 11 and 12, while the percentage of minority women increased at grades 1 through 10 and 13 through 15 but decreased at grades 11 and 12. The percentage of white men in the Foreign Service workforce decreased at all three grade levels. The relative numbers of white women and minority men either increased or, in the case of minority men and women at grades 11 to 12, remained virtually the same.
In general, the relative numbers of white women and minorities in the SES and in the Department of State’s top Foreign Service positions—Senior Foreign Service Officers and Chiefs of Mission—increased between 1984 and 1992. The exception was minority men in State’s SES and among its Chiefs of Mission. (See table 2.8.) The size of the increases varied by agency and group. White women experienced the greatest gains at the SES level at all agencies except Interior, where minority women showed the highest rate of increase. However, as table 2.8 shows, white men continued to dominate the higher ranks of the agencies reviewed, accounting for 75 percent or more of the agencies’ top positions in 1992. We compared the EEO profiles of the four agencies’ workforces as of September 1992 with the EEO profile of the nation’s CLF in 1990 to determine whether the agencies’ workforces were representative of the CLF. Using an index on which a value of less than 100 indicates underrepresentation, we found that certain EEO groups were often underrepresented on an overall basis (all occupations combined) and in key jobs in 1992 when compared to the CLF. The extent of underrepresentation, as discussed below, varied by agency and EEO group. White women in all four agencies, minority men at Agriculture and State, and minority women at Agriculture and Navy were underrepresented in the total workforces of these agencies in 1992 when compared to 1990 CLF data. In the white-collar workforce, white women were underrepresented in the four agencies reviewed, while minority women were underrepresented only at Agriculture. The other groups were fully represented in both the total and white-collar workforces. See table 2.9 and figure 2.7. Our analysis of 49 key white-collar jobs (18 at Agriculture, 17 at Interior, and 14 at Navy) showed that women and minorities were underrepresented in many of the key jobs that we reviewed at these three agencies in relation to their representation in the CLF for those same occupations.
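The comparison against the CLF described above rests on a representation index. A minimal sketch of that calculation, using hypothetical percentages rather than figures from the report, might look like:

```python
# Representation index: the group's share of the agency workforce divided
# by its share of the civilian labor force (CLF), scaled to 100.
# A value below 100 indicates underrepresentation relative to the CLF.
def representation_index(agency_pct, clf_pct):
    return agency_pct / clf_pct * 100

# Hypothetical example: a group holding 3.5% of an agency's jobs
# while making up 5.0% of the CLF for comparable occupations.
idx = representation_index(3.5, 5.0)
print(round(idx))  # an index of 70 -> underrepresented
```

An index of exactly 100 would mean the group’s share of the agency workforce matches its share of the CLF; how far below 100 an index falls gives a rough measure of the severity of underrepresentation.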
Table 2.9 shows that white women, blacks, Hispanics, and Asian women were the groups most often severely underrepresented at the agencies reviewed. Thus far, we have analyzed changes in the percentages and relative numbers of women and minorities employed in the agencies, as of the end of fiscal years 1984 and 1992. Also for 1992, we compared agency workforces with the 1990 CLF. To better understand the agencies’ efforts to diversify their workforces, it is important to examine the personnel actions that bring employees into and out of the agencies’ workforces and that mark their advancement within those workforces during those 2 fiscal years. This section focuses on some of these actions: hires, separations, and promotions. (These terms, as used in this report, are defined in appendix II, which also includes more data on the results of our analyses.) Overall, agencies hired and promoted women and minorities at rates that would increase their share of the agencies’ workforces, but separation rates for certain EEO groups were high. This higher rate of separations limited the agencies’ overall progress in achieving a representative workforce. In general, all four agencies hired women and minorities into their key white-collar occupations or, at State, the Foreign Service workforce, in percentages and relative numbers that exceeded the percentages and relative numbers at which they were employed in fiscal years 1984 and 1992. (See tables II.1 through II.4.) For example, as table II.1 shows, Interior hired 43 white women for every 100 white men hired in fiscal year 1992 into the key white-collar workforce, when it had 26 white women employed per 100 white men. It hired 16 minority men for every 100 white men hired in fiscal year 1992 when there were 14 minority men per 100 white men in the workforce. In other words, white women and minority men at Interior were hired at rates that would (disregarding separations) have increased their relative numbers in the workforce.
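The hire-rate versus employment-rate comparison in the Interior example can be expressed directly; the figures here are the ones the report cites for white women in Interior’s key white-collar workforce in fiscal year 1992:

```python
# A group's relative number among new hires versus among current employees.
# A hire rate above the employment rate would, disregarding separations,
# raise the group's relative number in the workforce over time.
def per_100_white_men(group, white_men):
    return group / white_men * 100

hired = per_100_white_men(43, 100)     # 43 white women hired per 100 white men hired
employed = per_100_white_men(26, 100)  # 26 white women employed per 100 white men
print(hired > employed)  # True -> hiring at this rate would increase representation
```

The same comparison applies to separations: when a group’s relative number among those separating exceeds its relative number among those employed, as the report found for white and minority women at several agencies, the net effect works against representation gains.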
As tables II.1 through II.4 show, the exceptions in fiscal year 1984 were minority women at Interior and minority men and women at Agriculture, who were hired in key white-collar jobs in lower relative numbers than those at which they were employed. Similarly, in fiscal year 1992, the relative numbers of women and minorities who were hired in State’s Foreign Service did not exceed the relative numbers employed. As tables II.1 through II.4 also show, the relative numbers at which the agencies hired women and minorities were generally greater than the relative numbers at which members of these groups were separated from the agencies. This was true in fiscal year 1992 for women and minorities at all four agencies, except for white women at State. In fiscal year 1984, the exceptions were minority women at Interior and minority men at Agriculture. However, tables II.1 through II.4 show that there were many instances in which the separation rates exceeded the rates at which women and minorities were employed. High separation rates for white women were apparent in all agencies except State in fiscal year 1984 and in all four agencies in 1992. For example, 49 white women per 100 white men separated from Agriculture in fiscal year 1992, when there were 41 white women employed per 100 white men. The separation rates for minority women were high in fiscal year 1992 at Interior, Agriculture, and Navy. Interior was the only agency in which the relative number of minority men separating from key white-collar jobs exceeded the relative number employed in fiscal year 1992. These situations signal a pattern that, if continued, would be detrimental to continued progress toward achieving a representative workforce. Promotions do not add to or subtract from the workforce population, but they can affect the distribution of different groups across the agencies’ grade structure.
In fact, because considerably larger segments of the workforces were promoted in a given year than were hired or separated, promotions have the potential to make a considerably greater impact on the distribution of women and minorities than do either hires or separations. Our analysis showed that, in all four agencies, the relative numbers of white women and minority men and women promoted were greater than the relative numbers employed in key white-collar or Foreign Service jobs in both fiscal years 1984 and 1992. The only specific EEO groups with lower promotion rates than employment rates in fiscal year 1992 were Asian men at Interior and State and Native American men at Navy and State. In general, the four agencies we reviewed increased their employment of women and minorities between fiscal years 1984 and 1992. Even in those workforces in which the percentages of white women and minority men declined, the decreases were usually less than those of white men. Consequently, in almost all cases, the number of women and minorities increased relative to the numbers of white men. In fiscal year 1992, women and minorities (1) were represented in lower relative numbers in the agencies’ key jobs and in State’s Foreign Service jobs than in the agencies’ total workforces, (2) were often underrepresented when compared to the CLF, and (3) remained less well represented in higher grades than in lower grades. For the most part, women and minorities in the agencies reviewed experienced favorable hiring and promotion rates in fiscal years 1984 and 1992, which contributed to the increases in their employment numbers. That is, agencies hired and promoted women and minorities at rates that often exceeded their relative numbers employed. However, in three agencies (all except State), white and minority women were separated in relative numbers that exceeded the relative numbers at which they were employed in 1992. This was true also of minority men at Interior.
These conditions limited agencies’ progress in diversifying their workforces. EEOC instructions provide that agencies should analyze their workforces to identify representation problems, causes, and actions needed to address them. The next chapter discusses how well agencies’ affirmative employment planning efforts followed EEOC instructions. The affirmative employment planning program analyses that the Departments of the Interior, Agriculture, Navy, and State undertook for the fiscal years 1988 through 1992 reporting cycle did not completely address all eight program elements included in EEOC’s MD-714 planning and reporting instructions. Several factors contributed to this condition. The agencies often lacked the data necessary to identify problems. According to agency EEO officials, senior managers were rarely involved in affirmative employment planning and saw the preparation of plans as something someone else (e.g., the EEO Director) was supposed to accomplish. Agencies’ affirmative employment planning program analysis efforts did not adhere to EEOC’s MD-714 directive in several ways. The agencies did not include the complete program analyses that MD-714 instructs them to perform to identify the fundamental causes of underrepresentation. In addition, those agencies that established numerical goals for improving EEO representation failed to relate them to specific underrepresentation problems as EEOC instructions provide. Under MD-714, the first step an agency should take to develop an affirmative employment multiyear plan is to do a comprehensive program analysis of eight program elements: workforce composition, recruitment and hiring, employee development, promotions, separations, discrimination complaints, organization and resources, and program evaluation. According to MD-714, after an agency conducts a program analysis of its affirmative employment program, it should then identify problems and barriers.
According to an EEOC memorandum on affirmative employment planning, agencies should maintain documentation to support their identification of barriers and development of objectives. None of the agency program analyses we reviewed fully addressed the eight program elements. Interior fully analyzed only one of the eight program elements; Agriculture, three; State, three; and Navy, two. None of the four agencies fully addressed four of the eight program elements (recruitment and hiring, promotions, separations, and program evaluation). Handling discrimination complaints was the only program element that all four agencies fully analyzed. For example, the workforce composition component of Interior’s analyses did not address EEO representation levels by key jobs as required by MD-714. In addition, Interior combined all the women and minority groups in its grade level analysis. A breakdown of grade level data by EEO group is called for by MD-714. A breakdown by EEO group is particularly important at Interior because of its high concentration of Native Americans and underrepresentation of other EEO groups. An official from Interior’s Office of Equal Opportunity said that analyzing workforce data by key jobs and grade requires significant manual effort. He added that the department lacks the computer capability and staff resources to conduct detailed analyses. Only one of the four agencies’ analyses addressed all the relevant information on employee development programs.
For example, two key training questions listed in MD-714 and not addressed in the agencies’ analyses were: “Has a survey of current skills and training of the agency’s workforce been conducted to determine the availability of employees from the EEO Groups, having skills required to meet agency staffing needs?” “Have studies been conducted on time-in-grade to determine the reasons for any differentials which may exist by minority status and sex?” EEOC stated that the program analysis questions in MD-714 are considered guidance, not specific requirements. However, EEOC’s memorandum on federal affirmative employment planning dated January 21, 1988, suggests otherwise. The memorandum states that “The program analysis is the foundation upon which the agency’s entire plan will be based. Therefore, each agency should ensure that it performs a comprehensive assessment of how the agency’s efforts are directed toward the eight major program elements. The analysis must provide complete rationale for responses to the questions that follow each element. It is not necessary that the analysis be limited to just those questions.” The memorandum also states that agencies should maintain documentation that supports the agency’s identification of barriers and development of objectives. However, agency officials from two of the agencies we reviewed also said that they considered the questions in MD-714 guidance rather than requirements that must be met. Agency officials also said that EEOC did not always ask agencies to provide comprehensive answers to the program analysis questions when it reviewed their plans. Another reason for the incomplete analysis of the program elements is that agencies did not fully analyze personnel event data (e.g., data on recruitment, hires, training, promotions, and separations). We discuss this issue later in this chapter.
In prior reports we have recommended that EEOC expand the agency workforce analysis requirements to include (1) major occupation workforce data by grade level or grade groupings and (2) analysis of hiring, training and development, promotion, and separation data. We believe that these additional analyses are critical to fully understanding the causes for trends in underrepresentation and overcoming barriers to achieving a representative workforce. We also have recommended that EEOC provide agencies with better guidance on what constitutes a major occupation and additional guidance on what to analyze. EEOC agreed with these recommendations and has addressed them in its proposed new management directive. MD-714 provides that agencies should examine their personnel and management policies, practices, and procedures to determine whether they limit or act as barriers to the representative employment of women and minorities. MD-714 instructs agencies to identify barriers in their multiyear affirmative employment plans and to provide narrative describing the barriers. While the agency plans we reviewed often acknowledged that agencies had made some progress in the areas of recruitment, hiring, and promotion of EEO groups, none included any explanation of the fundamental causes of underrepresentation where it existed. The State Department initiated studies to validate its procedures for examining and hiring Foreign Service employees partly in response to our 1989 report. Our report recommended, among other things, that the Secretary of State analyze personnel processes to determine (1) whether the Foreign Service written examination was a valid predictor of success, (2) why minorities and women were eliminated at a higher rate than white men by the final review panel process, and (3) why women and minorities were disparately assigned to certain functional work areas. The State Department has taken steps to address the first two of these recommendations.
State’s multiyear plan acknowledged that the Foreign Service written exam had adversely affected EEO groups. According to the Director, Office of Recruitment, Examination, and Employment, the State Department is validating the requirements of Foreign Service positions and correlating them with the test used to determine whether revisions are needed. The Director said that the Uniform Guidelines on Employee Selection Procedures do not require that the agency automatically discard or change the exam; they only require that State determine whether the exam is a valid indicator of job performance. According to State officials, in 1994 they implemented a new system for assigning functional work areas that addresses the allegations of disproportionate assignment of women and minorities to certain areas. The affirmative employment plans we reviewed generally acknowledged that the agencies lacked information on employee skills and training. With the exception of the State Department, the plans did not say whether procedures were in place to ensure that appropriate training opportunities were available to all employees. For example, the State Department’s multiyear affirmative employment plan stated that the agency lacked sufficient managerial and supervisory emphasis on the use of career training and employee development counseling opportunities. State’s plan also said that some supervisors do not have enough time to provide adequate career counseling because of their regular duties, and that many supervisors and employees were unaware of career ladders and the training needed to encourage upward movement. State’s plan listed specific actions to address these barriers, such as establishing mandatory EEO/supervisory training for supervisory personnel and a mentor program to provide additional career development information.
Navy’s multiyear affirmative employment plan acknowledged the underrepresentation of women and minorities in engineering positions and cited insufficient numbers of applicants, but offered no explanation of the root causes of this problem. The agencies’ plans that we reviewed acknowledged the lack of data and analyses to identify barriers to promotion or entry into senior management positions. Finally, the agencies’ plans contained little, if any, discussion of the reasons employees separated from the agencies and whether institutional policies affected the retention of women and minorities. The section labeled “barriers” in the agencies’ plans dealt primarily with administrative program management issues, such as the need to provide managers with EEO awareness training and the need for EEO data collection and evaluation systems. While these are important aspects of the affirmative employment program, none of the multiyear plans focused on the root causes of underrepresentation or the specific remedies required to correct the problem. Agency personnel and EEO specialists at three of the four agencies we reviewed told us that the affirmative employment plans were deficient because they were treated as a paperwork requirement instead of as plans of action to be taken seriously by the agencies’ managers. Officials at the other agency we reviewed attributed the multiyear plan’s limitations mainly to data limitations. While the multiyear plans offered little information on the underlying causes of underrepresentation, our interviews with senior managers and EEO and personnel staff at the four agencies disclosed a number of barriers they said limited representative employment.
At the Departments of the Interior, Agriculture, and Navy, these included senior managers’ apathy toward their units’ affirmative employment goals; selecting officials’ stereotyped thinking (e.g., the beliefs that women do not want to travel on their jobs or cannot meet the physical work requirements of traditionally “men only” jobs); and the absence of penalties for managers and supervisors who fail to maintain an environment free of discrimination. EEOC identified similar barriers and negative attitudes toward women and minorities in its 1990 on-site reviews of Interior’s and Navy’s affirmative employment programs. For example, EEOC’s report cited an interview with one senior manager who said that “minorities are not willing to reinvest their time and money into their careers.” This manager also said that “whites have the credentials and are more qualified than the minority applicants.” Regarding barriers to the entry of women and minorities into the Foreign Service, the former Director of the Office of Recruitment, Examination, and Employment at the State Department told us that women and minorities generally had not considered the Foreign Service as a career option early in their school training and thus frequently had not pursued the academic curriculum necessary to successfully complete the Foreign Service examination. This official said that the State Department was trying to address this barrier by providing more information to applicants on how to prepare for the Foreign Service exam. The State Department—which until recently had not extensively recruited women and minorities at the college level—also recognizes the need to increase recruiting efforts. While the establishment of numerical goals as an aid for achieving full representation is discretionary under MD-714, EEOC officials have said that such goal setting is one of a number of valuable management tools and a reflection of management’s commitment to overcoming underrepresentation.
Goal setting also provides measurable objectives for managers when recruiting, hiring, and promoting staff. MD-714 states that numerical goals, when used, should have a reasonable relation to the extent of underrepresentation in the agencies’ workforces, the number of vacancies, and the availability of candidates. Three of the four agencies we reviewed established numerical goals in their multiyear affirmative employment plans as a means of improving the representation of women and minorities in their workforces. The Department of the Interior did not do so, although some of its agencies, such as the Fish and Wildlife Service, did establish numerical goals. The numerical goals that Agriculture and Navy established may have been misdirected because they were not based on the degree of underrepresentation of EEO groups in job categories and major occupations as MD-714 provides. For example, EEOC noted that Agriculture had set overall goals for women or minorities rather than for the specific EEO groups that were underrepresented. EEOC also found that Agriculture set numerical goals in occupational series that had no representation problems. In contrast, Agriculture established no numerical goals for certain EEO groups (e.g., Hispanics) that its affirmative employment plan identified as being severely underrepresented. Navy identified severe underrepresentation of women and minorities in science and engineering positions in its 1988 multiyear plan, but did not establish specific goals for increasing the number of women and minorities in these occupations until fiscal year 1993. Furthermore, while Navy’s 1988 multiyear plan established a departmentwide goal of increasing the employment of Hispanics by 5 percent, it likewise did not outline the specific actions needed to achieve this goal until fiscal year 1993. In its 1990 report of Navy’s program, EEOC stated that it found no evidence that Navy was aggressively recruiting Hispanics.
EEOC also said that Navy’s goal for increasing Hispanic representation was below the Hispanic representation in the CLF. Navy’s fiscal year 1992 accomplishment report and 1993 affirmative employment update indicate that Navy is beginning to plan activities to recruit and employ Hispanics (e.g., increased participation of Hispanics in cooperative programs and Junior Fellowship programs). The State Department has established numerical hiring goals for EEO groups in its Foreign Service and Civil Service. However, its multiyear plan did not include goals for the advancement of women and minorities into senior-level positions. Adequate, reliable data with which to identify EEO problems and their causes are clearly essential to building affirmative employment plans. The agencies we reviewed were unable to adequately analyze the barriers to the representative employment of women and minorities because for the most part they lacked the requisite data on recruitment, hiring, training, job assignments, promotions, and separations. Recruitment data, or applicant flow data as they are commonly known, refer to the gender, race, and ethnic origin of job applicants. None of the agencies we reviewed gathered applicant flow data on an agencywide basis. Applicant flow data are needed to determine whether an agency’s recruiting efforts are generating sufficient numbers of women and minority applicants. Hiring data account for the number of persons selected for the positions available. Agency officials said they lacked the data partly because they were unclear about EEOC’s requirements for collecting and analyzing personnel event data. We found that while the Uniform Guidelines require that agencies maintain data on recruitment, hiring, training and development, job assignments, promotions, and separations, MD-714 does not require that these data be collected, analyzed, and reported in the affirmative employment plans.
Recognizing the importance of recruitment, hiring, promotion, and separations data, EEOC is revising its affirmative employment planning instructions to require agencies to collect, analyze, and report this information in the next affirmative employment planning cycle. Agencies also face practical difficulties in obtaining personnel event data. For example, EEO and personnel specialists we interviewed generally said that they lacked the computer capability to gather and analyze agencywide data on applicant flow, training, employee development, and separations. Developing this computer capability takes time and money, and each agency must weigh it against its other priorities. Collecting applicant flow data has been a problem because agencies must get approval from appropriate sources for the use of a form designed to collect such data. As discussed in our October 1991 testimony, agencies no longer have a governmentwide form for gathering applicant flow data because OPM’s authorization for the use of a form specifically designed for that purpose expired in December 1983. In 1989, EEOC proposed a directive that would have required agencies to collect the data, but, at OPM’s request, did not issue the proposed directive. OPM made the request because at that time it was considering collecting these data governmentwide as part of its new effort to automate its hiring process. We recommended in October 1991 that OPM act in cooperation with EEOC to examine options for collecting and analyzing applicant flow data and take prompt appropriate action. In August 1994, an OPM official from the Office of the Director told us that OPM was still discussing with EEOC the alternatives for collecting the data. OPM also told us that it has discussed with EEOC the costs of developing an applicant flow system and that OPM will not proceed without EEOC’s support.
However, in June 1995, the Director, OPM, stated that the agency was opposed to collecting applicant flow data because collecting these data is burdensome, ineffective, and costly. OPM also stated that agencies should be held accountable for their selections and not be allowed to use the composition of applicant pools as an excuse to deflect accountability from deciding officials. In July 1995, the Chairman, EEOC, disagreed with the Director of OPM’s views about the need for and collection of applicant flow data. The Chairman said that collection of applicant flow data is necessary to hold agency officials accountable. He also said that collection of applicant flow data is required by a regulation that is binding on both public and private sector employers. While EEOC’s draft revised management directive requires agencies to collect applicant flow data, EEOC still has not developed procedural guidance for collecting the data. If agencies continue to face difficulties in getting approval for the use of a form to collect applicant flow data, they may not be able to comply with EEOC’s proposed directive. MD-714 provides that a management team consisting of line management officials, EEO staff, personnel staff, and heads of other pertinent offices should meet to review and identify the agency policies, practices, and procedures that cause underrepresentation problems.
However, the personnel and EEO officials and line managers we talked to said that their agencies’ affirmative employment multiyear plans and annual updates were prepared by personnel and/or EEO office staff at the departmental level with little or no input from line managers and senior officials. According to the officials we interviewed, line managers and senior officials with authority to make personnel decisions regarding employment, job assignments, training, promotions, and terminations were rarely involved in the process of identifying barriers and actions to improve the representation of women and minorities in their agencies. The agency officials we talked to also said that line managers’ and senior officials’ involvement, when it occurred, was limited to providing data or a cursory review of draft plans prepared by the EEO or personnel staff offices. Our review of the agencies’ affirmative employment multiyear plans showed that senior officials and managers were not made responsible for implementing planned affirmative employment actions. For example, Interior’s multiyear affirmative employment plan assigned the responsibility for implementing the action items identified in the plan to the Offices of Equal Opportunity and/or Personnel. Senior officials, line managers, and supervisors were given no affirmative action tasks to carry out. EEO staff we talked to at Interior, Agriculture, and Navy characterized the affirmative employment plans as “administrative tasks” or “paper exercises” done to fulfill EEOC’s requirement that agencies submit a plan. They said that senior officials and line managers did not actively participate in preparing the plans. Navy and Interior EEO officials told us that senior officials and line managers in their agencies did not see affirmative employment as one of their key responsibilities because they were not held accountable for planning and carrying out affirmative action.
Agency heads have been required for many years, by law and regulation, to establish programs to end discrimination and to promote affirmative employment. Accountability suggests that goals will be established, performance will be measured and reported, and that this information in turn will be used to monitor progress toward achieving the agencies’ EEO objectives. However, at present no formal mechanisms are in place to evaluate agency heads on the results of their agencies’ EEO/affirmative employment programs. The National Performance Review (NPR) recognized a need to hold federal top managers accountable for EEO/affirmative employment program outcomes and identified ways to address this need. Specifically, the NPR called for charging “all federal agency heads with the responsibility for ensuring equal opportunity and increasing representation of qualified women, minorities, and persons with disabilities into all levels and job categories, including middle and senior management positions.” The NPR recommended, among other things, that the President mandate through an Executive Order that each agency head build EEO and affirmative employment elements “into the agency’s strategic business plan and include effective measurements for impact and change.” A draft of the Executive Order aimed at addressing this recommendation was under review in August 1995. Federal agencies may or may not have formal organizational strategic plans. However, the Government Performance and Results Act (GPRA) of 1993 requires that by September 30, 1997, the head of each agency submit to the Director, Office of Management and Budget (OMB), and to Congress a strategic plan containing a statement of goals and objectives, including outcome-related goals for the agency’s major functions and operations. The plan should also contain a description of the program evaluations used in establishing or revising general goals and objectives.
This long-term strategic plan provides a framework for integrating human resources management issues—of which EEO and affirmative employment are a part—into the agencies’ organizational plans and strategies. It provides the basis for holding agency heads accountable for human resource management effectiveness. It is unknown how the current reexamination of federal affirmative action programs will affect the administration’s plans for holding agency heads accountable for results in EEO/affirmative employment programs. The multiyear affirmative employment planning program analyses we reviewed did not adhere to all of EEOC’s instructions. The planning analyses did not fully address program elements such as recruitment and hiring, promotions, employee development, and separations. Agency officials told us they did not collect personnel event data and analyze it as part of the process of identifying barriers to EEO, in part because they did not consider this to be a planning requirement. EEOC has not clearly stated what data and analyses the multiyear plans should contain, nor has it focused agencies’ attention on identifying the causes of underrepresentation problems. We have made a number of recommendations to EEOC in past reports for improving the guidance it provides to agencies. EEOC’s proposed management directive incorporates many of our past recommendations and, if implemented, would clarify agency affirmative employment responsibilities. Finally, the agency EEO officials we talked to said that senior officials and senior managers had little involvement in formulating their agencies’ multiyear affirmative employment plans and annual updates. Our review of these plans showed that the plans assigned them no specific affirmative employment responsibilities. Management participation in multiyear plan development and execution is a part of the affirmative employment planning process outlined in MD-714.
While agency heads are responsible by law for implementing programs to eliminate the underrepresentation of women and minorities in the workforce, no formal mechanism is currently in place to hold them directly accountable for the success of those programs. The strategic plans required by GPRA provide a framework for integrating human resources management with agency business plans and strategies. These plans provide a vehicle for including affirmative goals and objectives in organizational plans and ultimately holding top managers accountable for EEO results. However, the strategic plans are not required until 1997. One way being considered to expedite this process is through the NPR recommendation that the President mandate through an Executive Order that each agency head build EEO and affirmative employment elements into his or her agency’s strategic business plans. It is unknown how the current reexamination of federal affirmative action programs will affect the administration’s plans for holding agency heads accountable for results in EEO/affirmative employment programs. OPM and EEOC did not provide sufficient oversight to ensure that agencies’ affirmative recruitment and employment programs were effectively correcting imbalances in their workforces. We found, for example, that OPM did not apply all the requirements set forth in regulations when reviewing FEORP plans. Moreover, while OPM increased the number of its on-site reviews in fiscal year 1993, these reviews provided only limited information on the success of agencies’ recruitment efforts. While EEOC’s on-site reviews addressed substantive issues, these reviews, prior to June 1993, were limited in number. According to EEOC officials, they revised their evaluation approach as of June 1993 to increase their frequency and number. 
5 CFR 720.205 requires that an agency’s FEORP plan include: (1) annual determinations of underrepresented EEO groups and indexes for measuring progress in eliminating underrepresentation; (2) listings of occupational categories suitable for external and internal recruitment; (3) descriptions of recruitment programs established to increase women and minority candidates from internal and external sources; (4) descriptions of methods the agency intends to use to identify and develop women and minority candidates from each underrepresented group; (5) an indication of how these methods differ from and expand upon prior agency efforts; (6) the expected number of job vacancies to be filled in the current year and future years by grade or job category; (7) identification of knowledge, skills, and abilities that can be obtained at lower grade levels in the same or similar occupational series to prepare candidates from underrepresented EEO groups for higher job progression; (8) descriptions of planned efforts to identify jobs that can be redesigned to improve opportunities for women and minorities; and (9) priority listings for special recruitment activities. OPM did not use all of these requirements when reviewing agency affirmative recruitment plans. Officials from OPM’s former Office of Recruitment and Employment told us OPM considered a plan to be adequate if it (1) identified recruitment priorities by targeted groups, grade levels, and occupations; (2) described recruitment methods and sources; and (3) provided target dates for accomplishing recruitment activities. According to these officials, this information, along with the agencies’ accomplishment reports and OPM trend data on agencies’ employment profiles, is sufficient for them to evaluate agencies’ FEORP activities. We reviewed the yearly FEORP plans prepared by Interior, Agriculture, Navy, and State for fiscal years 1991 through 1993. These plans generally lacked information required by 5 CFR 720.205.
Specifically, the plans did not address items 5 through 8 listed above. These requirements were developed because they would contribute to a strong affirmative recruitment program. OPM increased its on-site FEORP program evaluations from an average of 5 on-site reviews per year over fiscal years 1989-1992 to 27 on-site reviews in fiscal year 1993, reaching its goal of reviewing at least one-third of the agencies covered by FEORP. According to OPM officials, the on-site reviews were not designed to set expectations or evaluate an agency’s progress in terms of recruiting numbers. Rather, their purpose was to provide agencies with information about OPM activities, answer questions, and suggest ways of improving the agencies’ affirmative recruitment programs. OPM officials said that OPM has used a “non-threatening” approach to administering the FEORP program. OPM officials stressed that EEOC bears the primary oversight responsibility for affirmative recruitment and employment and that OPM’s primary role is to provide technical assistance to help agencies develop innovative programs that will correct imbalances in their workforces. In 1990, at the request of the Office of Affirmative Recruiting and Employment, OPM’s Office of Agency Compliance and Evaluation (ACE) reviewed the FEORP program. ACE’s review covered agency FEORP activities at 185 major installations employing about 316,000 civilian employees. ACE’s review findings were similar to those included in the on-site reviews performed by the Office of Affirmative Recruiting and Employment—namely, that agencies were involved in a variety of efforts to identify and reach out to women and minorities. However, ACE’s review also revealed that half of the installation-level personnel at these agencies were not familiar with their agencies’ FEORP plans and that installation personnel did not see connections between FEORP plans and affirmative employment program plans.
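Item (1) of the FEORP plan requirements listed earlier calls for indexes that measure progress in eliminating underrepresentation. The regulation does not prescribe a formula, but one common approach compares a group’s share of an agency’s workforce with its share of the civilian labor force (CLF). The sketch below, using hypothetical shares and a function name of our own, illustrates the idea:

```python
def representation_index(agency_share: float, clf_share: float) -> float:
    """Ratio of a group's share of the agency workforce to its share of
    the civilian labor force, scaled so that 100 means parity.
    Values below 100 indicate underrepresentation relative to the CLF."""
    return 100 * agency_share / clf_share

# Hypothetical shares for illustration only:
print(round(representation_index(0.06, 0.10)))  # 60  -> underrepresented
print(round(representation_index(0.12, 0.10)))  # 120 -> above CLF share
```

Tracking such an index annually, as the regulation envisions, would let an agency see whether the gap between its workforce and the CLF is narrowing.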
One of OPM’s functions under FEORP is to help increase the number of women and minorities in applicant pools at all grade levels. In principle, increased representation of women and minorities in applicant pools should eventually result in more hiring from these groups. However, OPM evaluations have not specifically examined the extent to which agency recruitment efforts have indeed increased the number of women and minorities in their applicant pools. OPM is responsible (under 5 CFR 720.203) for assisting agencies in determining whether applicant pools used in filling jobs in a category of employment where underrepresentation exists include sufficient candidates from any underrepresented groups. As discussed in chapter 3, neither the agencies nor OPM collect applicant pool data. Without these data, agencies and OPM cannot measure the effectiveness of affirmative recruitment efforts. According to officials in EEOC’s Office of Federal Operations, EEOC is responsible for overseeing about 121 federal agencies and more than 900 field installations. EEOC’s standard operating procedures for conducting on-site reviews, issued in 1990, stated that EEOC would target 23 agencies for review during the multiyear planning cycle and review the remaining agencies on a case-by-case basis. EEOC had completed 14 of the scheduled 23 on-site reviews between 1988 and June 1993. EEOC officials from the Office of Federal Sector Programs said EEOC had revised its scoping approach and, if its budget allowed, would be able to do more reviews each year. Subsequently, EEOC officials informed us that as of June 1995, the agency had completed 36 more on-site reviews. The officials explained that, while considerable staff resources and time were used in the past to examine a relatively small number of large, complex departments such as Navy and Interior, EEOC’s revised approach focuses on components of large departments and on small agencies.
EEOC expects to reach a 60-day goal for completing an on-site review which, if achieved, would allow for more reviews in a given year. EEOC officials also said that with additional experience in conducting on-site reviews, EEOC will more likely schedule large and complex agencies for review. The Director of Affirmative Employment, Federal Programs, also said that his staff spends most of its time reviewing annual updates and accomplishment reports and providing written responses to the agencies, and less time on evaluating the effectiveness of the programs. EEOC, like other agencies, has faced the challenge of meeting expanded oversight responsibilities with limited staff resources. At the end of fiscal year 1993, EEOC had 36 employees monitoring the affirmative employment programs of 121 agencies and 900 field offices. EEOC officials from the Office of Federal Sector Programs said that their staffing levels have remained virtually unchanged since 1988. EEOC’s on-site reviews have addressed significant program issues. In addition to analyzing the changes in the employment and advancement of women and minorities, EEOC examined agency management support and accountability; program guidance, coordination, and monitoring; and agency practices. As a result, EEOC’s on-site reports contained numerous and significant recommendations. For example, EEOC’s report on the Department of the Interior’s affirmative employment program contained 43 specific recommendations for improvements in almost all aspects of Interior’s program. EEOC recommended, among other things, that Interior set specific objectives to address the underrepresentation of EEO groups, establish time frames for accomplishing objectives, and hold responsible officials accountable for their implementation. 
EEOC’s report on Navy’s program recommended that Navy address the underrepresentation of women and minorities in its SES and upper grade levels; evaluate its Merit Promotion Program for adverse impact on women, minorities, and people with disabilities; establish uniform EEO performance standards for managers and supervisors, including civilian affirmative employment and EEO responsibilities in military evaluation reports; and accelerate the separations analysis needed to address the high rate at which minorities and women are separated from Navy. Navy agreed to implement all of EEOC’s recommendations. EEOC generally followed the criteria it developed for evaluating the agencies’ programs. The criteria, as stated in MD-714, consist of evaluating an agency affirmative employment program on the basis of positive changes in the participation of EEO groups in the workforce; successful hiring and internal movement activity; successful completion of the affirmative employment action plan; completeness and accuracy of required information; and effectiveness of the agency’s internal monitoring and evaluation system. Oversight of affirmative recruitment and employment programs helps to (1) ensure that agencies are taking the necessary steps to eliminate the underrepresentation of women and minorities as required by law, and (2) provide these agencies with meaningful feedback and assistance on how to improve their programs. We found that in reviewing agencies’ FEORP plans, OPM does not require agencies to follow all the requirements set forth in regulations. And, while OPM increased the number of its on-site reviews in fiscal year 1993, its reviews have not fully addressed the success of agencies’ recruitment efforts. Determining the effect of the recruitment program will require that OPM assist agencies in collecting and analyzing recruitment data.
In October 1991, we recommended that OPM act in coordination with EEOC to examine options for collecting and analyzing applicant flow data and take prompt appropriate action. EEOC’s on-site reviews have addressed significant program issues but have been limited in number. As a result, many agencies were not getting critical information on how to improve their programs. EEOC has since increased the number of reviews, adding 36 reviews since June 1993. In a letter dated June 14, 1995, the Deputy Assistant Secretary of Defense (Equal Opportunity) concurred with our findings and conclusions and provided updated information on the Navy’s affirmative employment efforts (see app. III). In a letter dated June 5, 1995, the Director of Interior’s Office of Equal Opportunity said that our analysis was useful and provided additional updated information (see app. IV). The Director of OPM, in a letter dated June 20, 1995, said that our report underscores the findings of the National Performance Review that there is duplication between the requirements and oversight roles of OPM and EEOC and that current requirements place too much emphasis on process rather than results (see app. V). The Department of State’s Director of EEO and a personnel specialist from the Department of Agriculture’s Office of Personnel provided oral comments on a draft of this report in July 1995 meetings. Both provided technical suggestions that we have incorporated, where appropriate. In a letter dated July 7, 1995, the Chairman, EEOC, disagreed with our assertions that (1) federal agencies had not followed EEOC’s instructions in their analyses of affirmative employment programs and had submitted incomplete plans, and (2) EEOC had approved the incomplete plans, thereby indicating that EEOC was not providing the oversight necessary to ensure that the proper affirmative action program analyses were being done (see app. VI).
In support of its position, EEOC articulated an interpretation of MD-714’s reporting requirements that was different from the one we had been provided by EEOC officials during the course of past reviews. According to the interpretation EEOC articulated in its comments, MD-714 provides agencies leeway in determining which program elements to report in their plans. Under this interpretation, we agree that the plans that our draft report had characterized as incomplete could instead be viewed as complete. We have revised the report to reflect this interpretation and to incorporate additional technical suggestions, as appropriate. A more important issue than the completeness of the plans is the underlying analyses upon which the plans are based. In its comments, EEOC said that the program analysis questions in MD-714 are also considered as guidance and not specific requirements. However, EEOC’s January 21, 1988, memorandum to federal agencies on affirmative employment planning says otherwise. The memorandum states that “The program analysis is the foundation upon which the agency’s entire plan will be based. Therefore, each agency should ensure that it performs a comprehensive assessment of how the agencies’ efforts are directed toward the eight major program elements. The analysis must provide complete rationale for responses to the questions that follow each element. It is not necessary that the analysis be limited to just those questions.” The memorandum also states that agencies should maintain documentation which supports the agency’s identification of barriers and development of objectives. Thus, while agencies need not report on all eight program elements in their plans, current MD-714 guidance requires that agencies use those elements in their analyses and maintain supporting documentation. Because reports may not include all of the relevant information, it is important for EEOC to perform on-site reviews which include evaluations of agencies’ program analyses. 
We believe that, as discussed in Chapter 4, EEOC’s increased rate of completed on-site reviews, if continued and done effectively, should help provide the necessary oversight for agency affirmative employment programs. On the issue of collecting data on job applicants, OPM’s and EEOC’s comments reflect different points of view. OPM said that it is opposed to collecting data from job applicants concerning their race and national origin because it believes that the collection of such data would be costly, ineffective, and a reporting burden. OPM also said that agencies should be held accountable for the compositions of their selections. In contrast, EEOC said that it believes the collection of applicant flow data is necessary to hold agency officials accountable and is also required by regulation. We have previously found that agencies frequently believed applicant flow data was useful and recommended reestablishing collection of that data.

Pursuant to a congressional request, GAO reviewed: (1) the representation of women and minorities at the Departments of the Interior, Agriculture (USDA), Navy, and State as compared to the agencies' total workforce and the nation's civilian labor force (CLF); (2) these agencies' compliance with the Equal Employment Opportunity Commission's (EEOC) affirmative employment planning instructions; and (3) the extent of EEOC and Office of Personnel Management (OPM) oversight of the agencies' affirmative employment and recruitment programs.
GAO found that: (1) between 1984 and 1992, the relative number of women and minorities increased in each agency, but certain equal employment opportunity (EEO) groups were underrepresented overall, particularly in key and higher grade positions when compared to the CLF; (2) underrepresentation in key jobs was more pronounced for white and minority women than for minority men; (3) the underrepresented EEO groups varied by agency; (4) white men still occupied 75 percent or more of the agencies' senior executive service or equivalent positions; (5) although women and minorities were hired and promoted into key agency jobs in greater numbers than their workforce representation, they also separated from the agencies at higher rates; (6) the agencies' multiyear affirmative employment planning programs did not fully comply with EEOC directives because the agencies considered the directives to be only guidelines, they lacked certain personnel data, senior managers were not involved in the plans' preparation, the agencies did not take the plans seriously, and EEOC approved incomplete plans; (7) those agencies that set employment goals did not link them to any particular underrepresentation problem; (8) there were no formal mechanisms to hold agency heads and senior managers accountable for their agencies' EEO programs; and (9) OPM and EEOC did not provide sufficient oversight to ensure that the agencies' affirmative employment programs effectively corrected their workforce imbalances.
In the past, the ICC regulated almost all of the rates that railroads charged shippers. The Railroad Revitalization and Regulatory Reform Act of 1976 and the Staggers Rail Act of 1980 greatly increased reliance on competition to set rates in the railroad industry. Specifically, these acts allowed railroads and shippers to enter into confidential contracts that set rates and prohibited ICC from regulating rates where railroads had either effective competition or rates negotiated between the railroad and the shipper. Furthermore, the ICC Termination Act of 1995 abolished ICC and transferred its regulatory functions to STB. Taken together, these acts anchor the federal government’s role in the freight rail industry by establishing numerous goals for regulating the industry, including to allow, to the maximum extent possible, competition and demand for services to establish reasonable rates for transportation by rail; minimize the need for federal regulatory control over the rail transportation system and require fair and expeditious regulatory decisions when regulation is required; promote a safe and efficient rail transportation system by allowing rail carriers to earn adequate revenues, as determined by STB; ensure the development and continuation of a sound rail transportation system with effective competition among rail carriers and with other modes to meet the needs of the public and the national defense; foster sound economic conditions in transportation and ensure effective competition and coordination between rail carriers and other modes; maintain reasonable rates where there is an absence of effective competition and where rail rates provide revenues that exceed the amount necessary to maintain the rail system and attract capital; prohibit predatory pricing and practices and avoid undue concentrations of market power; and provide for the expeditious handling and resolution of all proceedings.
While the Staggers Rail and ICC Termination Acts reduced regulation in the railroad industry, they maintained STB’s role as the economic regulator of the industry. The federal courts have upheld STB’s general powers to monitor the rail industry, including its ability to subpoena witnesses and records and to depose witnesses. In addition, STB can revisit its past decisions if it discovers a material error, or new evidence, or if circumstances have substantially changed. Two important components of the current regulatory structure for the railroad industry are the concepts of revenue adequacy and demand-based differential pricing. Congress established the concept of revenue adequacy as an indicator of the financial health of the industry. STB determines the revenue adequacy of a railroad by comparing the railroad’s return on investment with the industrywide cost of capital. For instance, if a railroad’s return on investment is greater than the industrywide cost of capital, STB determines that railroad to be revenue adequate. Historically, ICC and STB have rarely found railroads to be revenue adequate—a result that many observers relate to characteristics of the industry’s cost structure. Railroads incur large fixed costs to build and operate networks that jointly serve many different shippers. Some fixed costs can be attributed to serving particular shippers, and some costs vary with particular movements, but other costs are not attributable to particular shippers or movements. Nonetheless, a railroad must recover these costs if the railroad is to continue to provide service over the long run. To the extent that railroads have not been revenue adequate, they may not have been fully recovering these costs. The Staggers Rail Act recognized the need for railroads to use demand- based differential pricing to promote a healthy rail industry and enable it to raise sufficient revenues to operate, maintain and, if necessary, expand the system in a deregulated environment. 
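The revenue adequacy test described above reduces to a simple comparison of a railroad’s return on investment with the industrywide cost of capital. A minimal sketch follows; the function name is ours and the percentages are hypothetical, not STB figures:

```python
def revenue_adequate(return_on_investment: float, cost_of_capital: float) -> bool:
    """STB finds a railroad revenue adequate when its return on
    investment exceeds the industrywide cost of capital."""
    return return_on_investment > cost_of_capital

# Hypothetical figures for illustration only:
print(revenue_adequate(0.09, 0.10))  # ROI below cost of capital -> False
print(revenue_adequate(0.11, 0.10))  # ROI above cost of capital -> True
```

Because the threshold is the industrywide cost of capital rather than a fixed number, the determination for a given railroad can change from year to year even if its own return is stable.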
Demand-based differential pricing, in theory, permits a railroad to recover its joint and common costs—those costs that exist no matter how many shipments are transported, such as the cost of maintaining track— across its entire traffic base by setting higher rates for traffic with fewer transportation alternatives than for traffic with more alternatives. Differential pricing recognizes that some customers may use rail if rates are low—and have other options if rail rates are too high or service is poor. Therefore, rail rates on these shipments generally cover the directly attributable (variable) costs, plus a relatively low contribution to fixed costs. In contrast, customers with little or no practical alternative to rail—”captive” shippers—generally pay a much larger portion of fixed costs. Moreover, even though a railroad might incur similar incremental costs while providing service to two different shippers that move similar volumes in similar car types traveling over similar distances, the railroad might charge the shippers different rates. Furthermore, if the railroad is able to offer lower rates to the shipper with more transportation alternatives, that shipper still pays some of the joint and common costs. By paying even a small part of total fixed cost, competitive traffic reduces the share of those costs that captive shippers would have to pay if the competitive traffic switched to truck or some other alternative. Consequently, while the shipper with fewer alternatives makes a greater contribution toward the railroad’s joint and common costs, the contribution is less than if the shipper with more alternatives did not ship via rail. The Staggers Rail Act further requires that the railroads’ need to obtain adequate revenues to be balanced with the rights of shippers to be free from, and to seek redress from, unreasonable rates. Railroads incur variable costs—that is, the costs of moving particular shipments—in providing service. 
The Staggers Rail Act stated that any rate that was found to be below 180 percent of a railroad’s variable cost for a particular shipment could not be challenged as unreasonable and authorized ICC, and later STB, to establish a rate relief process for shippers to challenge the reasonableness of a rate. STB may consider the reasonableness of a rate only if it finds that the carrier has market dominance over the traffic at issue—that is, if (1) the railroad’s revenue is equal to or above 180 percent of the railroad’s variable cost (R/VC) and (2) the railroad does not face effective competition from other rail carriers or other modes of transportation. The changes that have occurred in the railroad industry since the enactment of the Staggers Rail Act are widely viewed as positive. The railroad industry’s financial health improved substantially as it cut costs, boosted productivity, and right-sized its networks. Rail rates generally declined between 1985 and 2000 but increased slightly from 2001 through 2004. Likewise, rail rates have declined since 1985 for certain commodity groups and routes despite some increases since 2001, but rates have not declined uniformly, and some commodities are paying significantly higher rates than others. For example, from 1985 through 2004, coal rates declined 35 percent while grain rates increased 9 percent. Concerns about competition and captivity in the industry remain because traffic is concentrated in fewer railroads. It is difficult to determine precisely how many shippers are captive to one railroad. Nevertheless, our analysis indicates that the extent of potential captivity appears to be dropping, but that the percentage of all industry traffic running at rates substantially over the statutory threshold for rate relief—traffic traveling at rates over 180 percent R/VC—has increased. 
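The statutory screen at the start of this discussion is a two-part test: the revenue-to-variable-cost (R/VC) ratio must be at least 180 percent, and the carrier must lack effective competition. A sketch of that screen, with function names of our own choosing:

```python
def rvc_ratio(revenue: float, variable_cost: float) -> float:
    """Revenue for a movement expressed as a percentage of its variable cost."""
    return 100 * revenue / variable_cost

def rate_reviewable(revenue: float, variable_cost: float,
                    effective_competition: bool) -> bool:
    """STB may consider a rate's reasonableness only if R/VC is at least
    180 percent and the carrier faces no effective competition."""
    return rvc_ratio(revenue, variable_cost) >= 180 and not effective_competition

# Hypothetical movements for illustration only:
print(rate_reviewable(2000, 1000, effective_competition=False))  # 200% R/VC -> True
print(rate_reviewable(1500, 1000, effective_competition=False))  # 150% R/VC -> False
print(rate_reviewable(2000, 1000, effective_competition=True))   # competition -> False
```

Note that crossing the 180 percent threshold does not itself make a rate unreasonable; it only opens the door to STB review.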
Furthermore, some areas with access to only one Class I railroad have higher percentages of traffic traveling at rates that exceed the statutory threshold for rate relief. This situation may reflect reasonable economic practices by the railroads in an environment of excess demand, or it may represent an abuse of market power. There is widespread consensus that the freight rail industry has benefited from the Staggers Rail Act. Ten of the 11 members of our expert panel believed that the Staggers Rail Act has had a strongly positive overall effect on freight railroad companies, while 8 believed the Staggers Rail Act had a strongly positive effect on shipping companies. In addition, various measures indicate an increasingly strong freight railroad industry. Freight railroads’ improved financial health is illustrated by a general increase in return on investment since 1980, as shown in figure 1. Freight railroads have also cut costs by streamlining their workforces; right-sizing their rail networks; and reducing track miles, equipment, and facilities to more closely match demand. Freight railroads have also expanded their business into new markets—such as the intermodal market—and implemented new technologies, including larger cars, and are currently developing new scheduling and train control systems. Some observers believe that the competition faced by railroads from other modes of transportation has created incentives for innovative practices, and that the ability to enter into confidential contracts with shippers has permitted railroads to make specific investments and to develop service arrangements tailored to the requirements of different shippers. Clifford Winston, Deregulation of Network Industries – What’s Next? (Washington: AEI-Brookings Joint Center for Regulatory Studies, 2000), pp. 43-44. Freight rail is an important component of our nation’s economy.
Approximately 42 percent of all intercity freight in the United States, measured in ton-miles, moves on rail lines. Freight rail is particularly important to producers and users of certain commodities. For example, about 70 percent of automobiles manufactured domestically and about 70 percent of coal delivered to power plants moves on freight rail. Rail rates across the freight railroad industry have generally declined since the enactment of the Staggers Rail Act. Because changes in traffic patterns over time (for example, hauls over longer distances) can result in a decrease in the average revenue per ton-mile, relying solely on cents per ton-mile can present misleading industrywide rate trends. Therefore, we developed a set of rail rate indexes to examine trends in rail rates over the 1985 through 2004 period. These indexes account for changes in traffic patterns over time that could affect revenue statistics but do not account for inflation. To provide a measure of inflation, we also included the price index for the gross domestic product (GDP) in figure 2. From 1985 through 1987, rail rates dropped by 10 percent and then continued to decline, although not as steeply, through 1998. Rates increased in 1999, then dropped again in 2000. In 2001 and 2002, rates rose again. Rates were nearly flat in 2003 and 2004, finishing approximately 3 percent above rates in 2000 but 20 percent below 1985 rates (these trends are shown in figure 2). While our rail rate index does not reflect the general effects of inflation, the continuous increases in the GDP price index over this period indicate that real rates decreased by more than 20 percent from 1985 through 2004. Rate data are not available for 2005 and 2006, but shippers, railroad officials, and financial analysts with whom we spoke told us that rates have generally increased during those years.
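The indexes just described hold traffic composition fixed so that shifts in mix (for example, toward longer hauls) do not masquerade as rate changes. GAO’s actual index construction is more involved, but the basic fixed-weight (Laspeyres-style) idea can be sketched with hypothetical rates and ton-miles:

```python
def fixed_weight_rate_index(base_rates, current_rates, base_tonmiles):
    """Laspeyres-style index: current rates are weighted by base-period
    traffic, so changes in traffic mix do not distort the trend.
    Inputs are dicts keyed by traffic category (e.g., commodity or route)."""
    base_revenue = sum(base_rates[k] * base_tonmiles[k] for k in base_rates)
    reweighted_revenue = sum(current_rates[k] * base_tonmiles[k] for k in base_rates)
    return 100 * reweighted_revenue / base_revenue  # 100 = no change

# Hypothetical rates (cents per ton-mile) and base-year ton-miles (millions):
base = {"coal": 2.0, "grain": 3.0}
curr = {"coal": 1.8, "grain": 3.3}
tm = {"coal": 500, "grain": 100}
print(round(fixed_weight_rate_index(base, curr, tm), 1))  # 94.6
```

In this toy example, coal’s rate fell and grain’s rose; because coal carries most of the base-period traffic, the index registers an overall decline even though one commodity’s rate increased, which mirrors the divergence between coal and grain rates discussed below.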
Similar to industrywide changes in rail rates, the rates for many commodities have declined since 1985 and have recently increased. In 2004, four commodities each made up 5 percent or more of freight railroad revenue—grain, coal, motor vehicles, and miscellaneous mixed shipments. In both the 1985 through 1989 and the 1990 through 1999 intervals, the rates for most of these commodities declined, while in 2000 through 2004, the rates increased for two commodities and decreased for two (see fig. 3). Although many rates have decreased, rates have not declined uniformly, and rates for some commodities are significantly higher than for others. Figure 4 compares commodity rates for coal, grain, miscellaneous mixed shipments, and motor vehicles from 1985 through 2004 using our rail rate index. Over the 20-year period most rates declined, with coal rates dropping most sharply, by 35 percent. Rates for miscellaneous mixed shipments and motor vehicles also declined, although to a lesser extent than coal rates. Grain rates initially declined from 1985 through 1987, but then diverged from the other commodity trends and increased, resulting in a net 9 percent increase by 2004. We examined rate changes for commodities traveling along hundreds of particular routes and found that the rates on a majority of the routes we analyzed decreased from 2000 through 2004. Figure 5 shows that from 2000 through 2004 rail rates decreased on about 55 percent of the routes in our analysis (334 of 604 routes). More specifically, the rates for most long-distance (over 1,000 miles) and medium-distance (501 to 1,000 miles) routes decreased. In one distance category, short-distance routes (up to 500 miles), there were more routes with increases (103) than decreases (94) from 2000 through 2004. For the long-distance routes we examined, figure 5 also shows that the number of routes with rate decreases was nearly twice the number of routes with rate increases.
Many of the largest rate increases were on long-distance routes carrying miscellaneous mixed shipments—which include intermodal goods—that originated in the Los Angeles-Long Beach-Riverside, California, economic area and terminated at various destinations across the country. Several shipper groups reported that many rate increases occurred after 2004; however, data are not available for 2005 and 2006. Several factors could have contributed to recent rate increases. Ongoing industry and economic changes have influenced how railroads have set their rates. Since the Staggers Rail Act was enacted, the railroad industry and the economic environment in which it operates have changed considerably. After years of reducing the size of its workforce and shedding track capacity, the industry is increasingly operating in a capacity-constrained environment in which the demand for its services exceeds its capacity in some areas. In addition, the industry has more recently increased employment and invested in increased capacity in key traffic corridors. Additionally, changes in broader domestic and world economic conditions have led to changes in the mix and profitability of traffic carried by railroads. For example, railroads have developed high- volume traffic by shipping import and export containers, leading them to price these shipments differently. According to DOT officials, some shippers—such as those in the automobile and chemical industries—may pay higher rates in order to secure higher quality service or due to liability issues. Lastly, the rail industry has continued to consolidate, potentially increasing the market power of the largest railroads. Our analysis included rate data through 2004, and according to freight railroad officials, shippers, and financial analysts, since 2004, rates have continued to increase as the demand for freight rail service has increased, and rail capacity has not kept pace with demand. 
While rates have generally decreased since 1985, other costs have been passed on to shippers, some of which STB has not accurately tracked. Several shippers with whom we spoke agreed that rates have dropped over the long term, but they also said that rates do not reflect the total cost of shipping by rail. According to some shippers, costs have shifted from the railroads to shipping companies, including the costs of railcar ownership. Figure 6 shows that the distribution of tonnage by railcar ownership has shifted by nearly 20 percent since 1985, indicating that less tonnage is now shipped in railcars owned by freight railroad companies. Besides rates, shippers reported bearing other costs, including infrastructure upgrade costs, fuel surcharges, and congestion fees. Conversely, one Class I railroad told us that some rates in the Carload Waybill Sample do not account for rebates or incentives that may change the actual rate paid by the shipper. We are unable to report on the full extent of all costs because STB has not accurately tracked the railroad revenues associated with some of these charges. For example, freight railroad companies do not consistently report revenues raised from fuel surcharges for use in the Carload Waybill Sample. Some railroads report fuel surcharges as part of their general revenues, while others categorize the surcharges separately under a miscellaneous revenue category, and still other railroads may not report revenue collected from fuel surcharges at all. Shippers have expressed deep concerns over how fuel surcharges relate to actual fuel costs. Other railroad revenues, such as those generated at railcar auctions and through congestion fees, may not be included in the waybill sample either. Understanding what railroads do and do not report as miscellaneous revenue in the waybill sample may be of increasing importance because fuel surcharges have become more prevalent, and railroad revenue reported as miscellaneous revenue has risen substantially in recent years.
From 2000 through 2004, the miscellaneous revenue reported in the waybill sample more than quadrupled in value, from $141 million to $614 million (see fig. 7). Despite this increase, $614 million still represents less than 1.5 percent of the approximately $42 billion in freight railroad revenue reported for 2004. Since 2004, miscellaneous revenue may have further increased, as railroad and shipper groups with whom we spoke said that many fuel surcharge increases took effect in 2005. During our review, STB proposed to more closely track and otherwise monitor revenues associated with fuel surcharges. Concerns about competition and captivity in the railroad industry remain because traffic is concentrated in fewer railroads, although there is disagreement on the state of competition in the industry. It is difficult to determine the number of captive shippers, because proxy measures can overstate or understate captivity, but our analysis of available measures indicates that the extent of captivity is dropping. At the same time, the percentage of all industry traffic running substantially over the statutory threshold for rate relief has increased from about 4 percent of tonnage in 1985 to about 6 percent of tonnage in 2004. Furthermore, some economic areas with access to one Class I railroad have higher percentages of traffic traveling at rates that exceed the statutory threshold for rate relief. During the past 30 years, the freight railroad industry has become more concentrated. In 1976, there were 30 independent Class I railroad systems, consisting of 63 Class I railroads operating in the United States. Currently there are seven railroad systems, consisting of seven Class I railroads. Nearly half of that reduction was attributable to consolidations. The railroad industry is dominated by four Class I railroads—two in the East and two in the West.
As figure 8 shows, the market share of these four Class I railroads has been increasing and accounted for over 89 percent of the industry’s revenues in 2004. There is significant disagreement on the state of competition in the rail industry and on whether or not federal regulation—resulting from legislation such as the Staggers Rail Act—has ensured effective competition among railroads. This disagreement was represented on our panel of 11 experts, 6 of whom indicated that rail-to-rail competition has been achieved (either “greatly” or “somewhat”) and 4 of whom maintained that effective competition had not been achieved. One member of our panel viewed less competition among rail carriers as a negative development because it can result in less efficient railroad companies and fewer options for shipping companies. Another member of our panel said that industry consolidation was essential to achieving an efficient and complete rail network under fewer, but ultimately stronger, railroad companies. Other experts also pointed to the hundreds of short-line railroads that have come into being since the enactment of the Staggers Rail Act, as well as increases in other competitive options for shippers from other modes such as trucks and barges. A reduction in competitive options can have a significant impact on the rates railroads charge shippers. There are a variety of contexts that affect how railroads compete with each other and with other modes, such as when route origins and destinations can both be reached by more than one railroad, or by multiple modes of transportation. Comparing two routes for shipping the same commodity, but using a different number of rail carriers, can illustrate this effect. Figure 9 shows two long-distance grain routes that both terminate in the Portland, Oregon, economic area from different origin points. 
Both routes carry comparable tonnage, but the route originating in the economic area in and around Sioux Falls, South Dakota, is served by two Class I railroads, whereas the route from the Minot, North Dakota, economic area is served by one Class I railroad. The rates for the Minot route are roughly double the rates for the Sioux Falls route. The ability to build out to another railroad can also create competition and improve railroad rates for some shippers. For example, following a build-out, a shipper gained access to a second railroad at an origin point that had previously been served by one Class I railroad. Figure 10 shows that within a few years after the introduction of service by the second railroad, the rates had dropped significantly. Because even a short-segment build-out can be quite costly, shippers are unlikely to pursue build-out options without a substantial traffic base. Some experts with whom we spoke said that situations like the one depicted in figure 9 reflect the reality of differential pricing in the freight railroad industry, or suggested that other factors, such as differences in the lengths of the two routes, may be the cause of rate discrepancies. Others believe that a significant rate decrease after the introduction of competition is evidence that railroads are extracting monopoly rates from captive shippers. While competition between rail carriers is particularly important in some cases, in other cases, competition between rail and other transportation modes, such as trucks and barges, may be more important. Particularly for bulk commodities such as grain, when shipper locations can be served by barge transportation, rail rates will be lower relative to rail costs than on routes that are not conducive to barge competition.
Figure 11 depicts costs and revenues for two routes, one (from the Champaign, Illinois, economic area to the New Orleans, Louisiana, economic area) with rail and barge options, and the other (from the Champaign, Illinois, economic area to the Atlanta, Georgia, economic area) with just a rail option. Although both routes have the same origin, for shipping the same commodity over a comparable distance, the route with the barge option has consistently lower rates than the route with just rail service. Besides the number of rail carriers serving a location, the use of contracts for rail service can affect the competitive landscape. The Staggers Rail Act allowed railroad and shipping companies to enter into confidential contracts for rail service and also placed all traffic running under contract outside the remaining rate regulations. According to railroad and shipper groups, the duration of contracts has declined, in part because of the railroads’ desire to quickly react to shifting market demand, which can result in higher rates. Other shippers were concerned that moving away from confidential contracts to public pricing could represent price signaling and further reduce competition between railroads. In 2004, 70 percent of tonnage and 71 percent of industry revenue moved under contract. It is difficult to determine precisely how many shippers are “captive” to one railroad because the proxy measures that provide the best indication can overstate or understate captivity. One way of determining potential captivity is to identify which Bureau of Economic Analysis (BEA) economic areas were served by only one Class I railroad. In 2004, 27 of the 177 BEA economic areas were served by only one Class I railroad. As shown in figure 12, these areas include parts of Montana, North Dakota, New Mexico, Maine, and smaller areas in several states.
Another way of looking at potential captivity is to calculate how much route tonnage originating in a given economic area has access to only one Class I railroad. Figure 13 shows the percentage in 2004 of all industry tonnage originating in economic areas with access to only one Class I railroad. In particular, economic areas with more than 75 percent of tonnage shipped on one railroad appear most prevalent in states such as Montana, Idaho, North Dakota, and Texas. Tonnage originating in these economic areas varies widely, from a little over 55,000 tons to over 36 million tons. According to our analysis of available measures, the overall extent of captivity appears to be dropping in the freight railroad industry. We examined tonnage, revenue, and access statistics for all routes—originating and terminating in economic areas—captured in the Carload Waybill Sample and other DOT data. In 2004, origin and destination routes with access to only one Class I railroad carried 12 percent of industry revenue and 10 percent of industry tonnage, which represents a decline from 1994, when 22 percent of industry revenue and 21 percent of industry tonnage moved on routes served by one Class I railroad (see table 1). This decline suggests that more railroad traffic is traveling on routes with access to more than one Class I railroad. While overall industry tonnage with access to more than one Class I railroad appears to have increased, some economic areas have a higher percentage of all industry traffic tonnage shipping on one Class I railroad. From 1994 through 2004, parts of states such as Texas, Tennessee, and Montana experienced increases of 25 percent or more in tonnage with access to one Class I railroad, while parts of other states such as Oregon, New York, and Florida saw their percentages of tonnage with access to one Class I railroad drop by more than 25 percent (see fig. 14).
While examining BEA areas provides a proxy measure for captivity, a number of factors may understate or overstate whether shippers are actually captive. The first three factors may work to understate the extent of captivity among shippers. First, routes originating within economic areas served by multiple Class I railroads may still be captive if only one Class I railroad serves their destination, and a shipper must use that one railroad for that particular route. Second, some BEA areas are quite large, so a shipper within the area may have access to only one railroad, even though there are two or more railroads within the broader area. Third, an origin may be served by only one Class I railroad even when no single Class I railroad serves the entire route; because more than one Class I railroad provides service between the origin and destination, such a route would not be counted as captive, although it is partially captive. Two additional limitations may work to overstate the number of locations captive to one railroad. First, this analysis accounts for Class I railroads only and does not account for competitive rail options that might be offered by Class II or III railroads such as the Guilford Rail System, which operates in northern New England. Second, this analysis considers only competition among rail carriers and does not examine competitive options offered by other transportation alternatives such as trucks and barges. To determine potential captivity, we applied another measure—traffic traveling at a revenue-to-variable-cost (R/VC) ratio equal to or greater than 180 percent, which is part of the statutory threshold for bringing a rate relief case before STB. STB regards traffic at or above this threshold as “potentially captive.” As with BEA areas, examining R/VC levels as a proxy measure for captivity can also understate or overstate captivity. For example, it is possible for the R/VC ratio to increase while the rate paid by a shipper is declining.
Assume that in Year 1, a shipper is paying a rate of $20 and the railroad’s variable cost is $12; the R/VC ratio—the rate divided by the variable cost—would be 167 percent. If in Year 2 the variable cost declines by $2, from $12 to $10, and the railroad passes this cost savings directly on to the shipper in the form of a reduced rate, the shipper would pay $18 instead of $20. However, as shown in table 2, because both revenue and variable cost decline, the R/VC ratio increases to 180 percent. Since 1985, and as a percentage of all traffic, the amount of potentially captive traffic traveling at rates over 180 percent R/VC and the revenue generated from that traffic have both declined. Revenue generated from traffic traveling at rates over 180 percent R/VC decreased from 41 percent of all industry revenue in 1985 to 29 percent in 2004 (see fig. 15). However, since 1985, tonnage from traffic traveling at rates substantially over the threshold for rate relief has increased. Total industry tonnage has increased significantly (from 1.37 billion tons in 1985 to 2.14 billion tons in 2004), with the tonnage traveling at rates above 300 percent R/VC more than doubling—from about 53 million tons in 1985 to over 130 million tons in 2004 (see fig. 16). As a percentage of all industry traffic, traffic traveling at rates between 180 and 300 percent R/VC decreased from 36 percent in 1985 to 25 percent in 2004. In contrast, the percentage of all industry traffic traveling at rates above 300 percent R/VC increased from 4 percent in 1985 to 6 percent in 2004 (see fig. 17). Increases in traffic traveling at rates over 300 percent R/VC appear widely distributed throughout the country, although in some areas increases have been higher than in others.
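The Year 1/Year 2 arithmetic above can be sketched in code (the dollar figures are the hypothetical ones from the example in the text, not actual railroad data, and the function name is ours):

```python
def rvc_ratio(rate, variable_cost):
    """Revenue-to-variable-cost (R/VC) ratio, expressed as a percentage."""
    return 100 * rate / variable_cost

# Year 1: the shipper pays a $20 rate against $12 in railroad variable cost.
year1 = rvc_ratio(20, 12)  # about 167 percent

# Year 2: variable cost falls by $2 to $10, and the railroad passes the
# savings through, so the shipper pays $18 -- a lower rate than in Year 1.
year2 = rvc_ratio(18, 10)  # 180 percent

# The ratio reaches the 180 percent rate relief threshold even though
# the rate the shipper actually pays has declined.
```

This illustrates why the R/VC ratio can overstate captivity: the proxy rises whenever variable cost falls faster than the rate, regardless of whether the shipper is worse off.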
Four economic areas located in parts of Montana, New Mexico, North Dakota, and West Virginia had the largest increases in traffic traveling at rates over 300 percent R/VC, with an increase of more than 25 percent from 1985 through 2004 (see fig. 18). In addition to national changes, significant increases in traffic traveling at rates over 300 percent R/VC can be seen in certain states, for certain commodities, and for certain routes. For example, in 1985 virtually no coal originating in Ohio traveled at rates over 300 percent R/VC. In 2004, nearly half of coal traffic originating in Ohio traveled at rates over 300 percent R/VC. Increases in traffic traveling at rates over 300 percent R/VC can also be seen at the route level. Figure 19 shows the amount of traffic traveling at rates over 300 percent R/VC on long-distance grain routes from the Minot, North Dakota, and Billings, Montana, economic areas to the Portland-Vancouver-Beaver Falls, Oregon, economic area. Of the routes we examined, these two had the highest percentage of traffic traveling at rates over 300 percent R/VC for 2004, and on both routes, this traffic had substantially increased over 1985 levels. For both the Minot and Billings routes, increases in R/VC from 1985 through 2004 were driven more by increases in revenue than by changes in variable cost. From 1985 through 2004, revenue from all grain traffic—not just traffic traveling at rates above the statutory threshold for rate relief—on the route from the Minot, North Dakota, economic area to the Portland-Vancouver-Beaver Falls, Oregon, economic area increased from approximately $18.4 million to approximately $30.8 million. Variable cost increased at a much slower pace, rising from approximately $12.2 million to approximately $12.4 million. For the route from the Billings, Montana, economic area to the Portland-Vancouver-Beaver Falls, Oregon, economic area, grain revenue more than tripled, from approximately $11.2 million in 1985 to approximately $42.7 million in 2004.
Variable cost also increased substantially—although still not as much as revenue—rising from approximately $5.5 million to approximately $15.1 million. Some economic areas with access to one Class I railroad also have more than half of their traffic traveling at rates that exceed the statutory threshold for rate relief. For example, parts of New Mexico and Idaho with access to one Class I railroad have more than half of all traffic originating in those same areas traveling at rates over 180 percent R/VC (see fig. 20). However, there are instances in which an economic area may have access to two or more Class I railroads and still have more than 75 percent of its traffic traveling at rates over 180 percent R/VC, as well as other instances in which an economic area may have access to one Class I railroad and have less than 25 percent of its traffic traveling at rates over 180 percent R/VC. Nevertheless, our analysis shows that some areas of the country with access to only one Class I railroad have higher levels of traffic traveling at rates over the statutory threshold for rate relief. This situation may reflect reasonable economic practices by railroads in an environment of excess demand, or it may represent an abuse of market power. Our analysis provides an important first step in assessing competitive markets nationally, but it is imperfect given the inherent limitations of the Carload Waybill Sample and of the proxy measures available for weighing captivity. When combined with comments from participants on our expert panel and interviews with shipper and railroad groups, the results of our analysis suggest that shippers in selected markets may be paying excessive rates, meriting further inquiry and analysis.
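Applying the same R/VC arithmetic to the Minot and Billings grain figures above gives the implied route-level ratios (these are aggregates computed from the approximate dollar amounts in the text, not the rates on individual movements, and the function name is ours):

```python
def rvc_pct(revenue_millions, variable_cost_millions):
    """Route-level revenue-to-variable-cost ratio, in percent."""
    return 100 * revenue_millions / variable_cost_millions

# Minot, ND -> Portland, OR grain traffic ($ millions, approximate)
minot_1985 = rvc_pct(18.4, 12.2)   # roughly 151 percent
minot_2004 = rvc_pct(30.8, 12.4)   # roughly 248 percent

# Billings, MT -> Portland, OR grain traffic ($ millions, approximate)
billings_1985 = rvc_pct(11.2, 5.5)    # roughly 204 percent
billings_2004 = rvc_pct(42.7, 15.1)   # roughly 283 percent

# On both routes the ratio rose mainly because revenue grew much
# faster than variable cost, consistent with the discussion above.
```

Both routes end 2004 well above the 180 percent statutory threshold, which is why they stand out in the route-level analysis.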
The Staggers Rail and ICC Termination Acts promoted greater reliance on competition as the preferred method to protect shippers from unreasonable rates and granted STB broad authority to monitor the performance of the railroad industry. STB has taken a number of actions to provide protections for captive shippers from unreasonable rates in the absence of effective competition, including establishing a process for captive shippers to obtain relief from unreasonable rates. Despite STB’s actions, there is little effective relief for captive shippers because STB’s standard rate relief process is largely inaccessible. While STB continues to refine its practices, an assessment of competitive markets would provide further information about the extent of captivity among shippers and the merits of a range of proposed actions to enhance competitive options available to shippers. In addition, changes to the rate relief process could provide greater protection from unreasonable rates. The Staggers Rail and ICC Termination Acts encourage competition as the preferred way to protect shippers and to promote the financial health of the railroad industry. At the same time, the acts give STB the authority to adjudicate rate cases to resolve disputes between captive shippers and railroads upon receiving a complaint from a shipper; approve rail transactions, such as mergers, consolidations, and acquisitions; prescribe new regulations, such as rules for competitive access; and inquire into and report on rail industry practices, including obtaining information from railroads on its own initiative and holding hearings to inquire into areas of concern, such as competition. The federal courts have upheld STB’s general powers to monitor the rail industry, including its ability to subpoena witnesses and records and depose witnesses.
STB has the authority and ability to inquire into and report on railroad practices, and it also has authority to take a number of actions based on the results of that inquiry. First, STB could issue a general rule making that would alter the administrative rules for the industry. For example, STB has the authority to require a railroad to make its terminal facilities available to another railroad under certain circumstances. Second, STB could reopen a past decision if it found a material error in the case, if new evidence emerged, or if circumstances affecting the case substantially changed. Finally, if STB received a complaint from a shipper, it could then launch a formal investigation and prescribe specific remedies to address the complaint. Under its adjudicatory authority, STB has taken a number of actions to provide protection for captive shippers. STB determines the reasonableness of challenged rates in the absence of competition upon receiving a complaint from a shipper. The rate relief process is the principal method by which shippers seek relief from unreasonable rates. STB developed standard rate case guidelines, under which captive shippers can challenge a rail rate and appeal to STB for rate relief. Under the standard rate relief process, STB assesses whether the railroad dominates the shipper’s transportation market and, if it finds market dominance, proceeds with further assessments to determine whether the actual rate the railroad charges the shipper is reasonable. STB requires that the shipper construct a hypothetical, perfectly efficient railroad that would replace the shipper’s current carrier and demonstrate how much that optimally efficient railroad would need to charge the shipper. As part of the rate relief process, both the railroad and the shipper have the opportunity to present their facts and views to STB, as well as to present new evidence.
In 1999, we reported that shippers and shippers’ associations indicated that constructing a hypothetical railroad is difficult, particularly for small shippers, because the time and cost associated with the model’s development may outweigh the compensation afforded the shipper should STB determine that the challenged rate was unreasonable. Since we reported on the process in 1999, STB has taken several actions to reduce potential barriers for filing a complaint. For example, STB now conducts mediation to begin cases, has added staff to process cases, and has eliminated certain criteria for assessing whether a railroad dominates a shipper’s market. STB also created alternatives to the standard rate relief process, developing simplified guidelines, as Congress required, for cases in which the standard rate guidelines would be too costly or infeasible given the value of the cases. Under these simplified guidelines, captive shippers who believe that their rate is unreasonable can appeal to STB for rate relief, even if the value of the disputed traffic makes it too costly or infeasible to apply the standard guidelines. In addition, STB created a voluntary arbitration option that parties can use to resolve disputes over rates. Under its authority to approve rail transactions, STB has approved railroad mergers that it finds consistent with the public interest. STB has also taken action to ensure that any potential merger-related harm to competition is mitigated. STB’s mitigation efforts have focused on preserving competition where it could be lost at 2-to-1 points (locations where a merger would reduce the number of serving railroads from two to one), for example, by imposing conditions that allow one railroad to operate over the tracks of another railroad (called trackage rights). STB has historically not taken action to introduce service where shippers have service by only one carrier.
Under its authority to prescribe new regulations, STB established a process by which shippers can file a complaint if they are captive to one railroad and believe that the railroad is engaged in anticompetitive behavior. Under this process, if the shipper proves that the railroad is engaged in anticompetitive behavior, STB can prescribe remedies such as trackage rights that would give the shipper access to another railroad. Finally, under its authority to inquire into and report on the rail industry, STB instituted proceedings to review rail access and competition issues. For example, in April 1998, at the request of Congress, STB commenced a review of access and competitive service in the rail industry and decided to consider revising its competitive access rules. However, in its December 1998 report to Congress, STB declined to take further action on this issue because it had adopted new rules giving shippers temporary access to alternative routing options during periods of poor service. In addition, STB observed that the competitive access issue raises basic policy questions that are more appropriately resolved by Congress. In 2001, STB adopted new regulations for rail mergers that require the applicant to demonstrate that the merger would enhance, not just preserve, competition. Despite STB’s efforts, there is widespread agreement that STB’s standard rate relief process is inaccessible to most shippers and does not provide for expeditious handling and resolution of complaints. The process remains expensive, time consuming, and complex. While STB does not keep records of the cost of a rate case, shippers we interviewed agreed that the process can cost approximately $3 million per litigant. Shippers told us that a case would need to involve several million dollars before it would be worthwhile to spend $3 million litigating a case that they could possibly lose.
Thus, shippers noted that only large-volume shippers, such as coal shippers, with set origins and destinations have the resources to afford the STB rate relief process. In addition, shippers said that they do not use the process because it takes so long for STB to reach a decision. Lastly, shippers continue to state that the process is both time consuming and difficult because it calls for them to develop a hypothetical competing railroad to show what the rate should be and to demonstrate that the existing rate is unreasonable. Since 2001, only 10 cases have been filed, and these cases took between 2.6 and 3.6 years—an average of 3.3 years per case—to complete. Of those 10 cases, 9 were filed by coal shippers. The simplified guidelines also have not effectively provided relief for captive shippers. Although these simplified guidelines have been in place since 1997, a rate case has not been decided under the process set out by the guidelines. STB held public hearings in April 2003 and July 2004 to examine why shippers have not used the guidelines and to explore ways to improve them. At these hearings, numerous organizations provided comments to STB on measures that could clarify the simplified guidelines, but no action was taken. STB observed that parties urged changes to make the process more workable, but disagreed on what those changes should be. Several shipper organizations told us that shippers are concerned about using the simplified guidelines because they believe the guidelines will be challenged in court, resulting in lengthy litigation. STB officials told us that they—not the shippers—would be responsible for defending the guidelines in court. STB officials also said that if a shipper won a small rate case, STB could order reparations to the shipper before the case was appealed to the courts. STB’s arbitration option has never been used.
Under this approach, each party would submit a proposed rate, and an arbitrator would decide the rate using a “give and take” approach—that is, the arbitrator would determine the rate without being required to pick one of the two offers. According to STB officials, this option has not been used, in part, because the cases that go before STB are contentious, with high monetary stakes. As a result, there is less willingness from either side to arbitrate. Shippers have not obtained relief through STB’s “competitive access” rules. Under these rules, shippers can file a complaint to request that one railroad obtain access to another railroad’s tracks when necessary to remedy anticompetitive behavior by the owning railroad. Shippers who file a complaint must show that the owning railroad has engaged in anticompetitive behavior. To date, STB has found that all complaints have failed to prove that the owning railroad engaged in anticompetitive behavior. During our review, STB has continued to refine its processes for shippers to obtain relief from unreasonable rates and competitive access. For example, STB recently proposed a rule making to change the simplified guidelines in response to the comments gathered at its April 2003 and July 2004 hearings on why those guidelines have not been used by shippers and on ways to improve them. In addition, STB is seeking public comment on several measures it has proposed to adopt regarding railroad practices involving fuel surcharges. The proposals follow STB’s May 2006 public hearing on how railroads calculate and charge fuel surcharges and respond to extensive testimony on these charges submitted to STB by the rail industry, the public, and railroad customers. STB also announced its intent to hold a public hearing on certain issues related to rail transportation rates for grain.
Lastly, STB recently requested written comments and held a public hearing in response to a petition filed by a shipper group to prevent, or put a time limit on, paper barriers. Paper barriers are contractual agreements that may be made when a Class I railroad either sells or leases some of its track to another railroad (typically a short line or regional railroad) and that stipulate that virtually all traffic originating on that line must interchange with the Class I railroad that sold the track or pay a penalty. The results of our analysis suggest a reasonable possibility that shippers in selected markets may be paying excessive rates related to a lack of competition in these markets. While our analysis of available measures shows that the extent of captivity appears to be dropping in the freight railroad industry, shippers that may be captive are paying substantially over the statutory threshold for initiating a rate relief case. This situation may simply reflect reasonable economic practices by railroads in a capacity-constrained environment in which demand for rail services increasingly exceeds supply, or it may represent an abuse of market power. Our analysis provides an important first step in assessing competitive markets nationally, but it is imperfect given the inherent limitations of the Carload Waybill Sample and the proxy measures available for weighing captivity. A more rigorous analysis of competitive markets nationally is needed—one that identifies the state of competition nationwide and inquires into pricing practices in specific markets. If this assessment determines that market power is being abused or the goals of the Staggers Rail Act are not being met, STB could consider several methods to address competition concerns, such as initiating a generally applicable rule making or, if a complaint is filed, providing specific remedies to increase competition.
Shipper groups, economists, and other experts in the rail industry have suggested several alternative approaches as remedies that could provide more competitive options to shippers in areas of inadequate competition or excessive market power. These groups view such approaches as more effective than the rate relief process at promoting greater reliance on competition to protect shippers against unreasonable rates. Some proposals would require legislative change or a reopening of past STB decisions. Each of these approaches has potential costs and benefits. On the one hand, they could expand competitive options, reduce rail rates, and decrease the number of captive shippers as well as reduce the need for both federal regulation and a rate relief process. On the other hand, reductions in rail rates could affect railroad revenues and limit the railroads’ ability and potential willingness to invest in their infrastructure. In addition, some markets may not have the level of demand needed to support competition among railroads. However, in markets that do, the targeted approaches frequently proposed by shipper groups and others include the following: Reciprocal switching: This approach would allow STB to require railroads serving shippers that are close to another railroad to transport cars of a competing railroad for a fee. The shippers would then have access to railroads that do not reach their facilities. This approach is similar to the mandatory interswitching in Canada, which enables a shipper to request a second railroad’s service if that second railroad is within approximately 18 miles. Some Class I railroads already interchange traffic using these agreements, but they oppose being required to do so. Under this approach, STB would oversee the pricing of switching agreements. 
This approach could also reduce the number of captive shippers by providing a competitive option to shippers with access to a proximate but previously inaccessible railroad and thereby reduce traffic eligible for the rate relief process (see fig. 21). Terminal agreements: This approach would require one railroad to grant access to its terminal facilities or tracks to another railroad, enabling both railroads to interchange traffic or gain access to traffic coming from shippers off the other railroad’s lines for a fee. Current regulation requires a shipper to demonstrate anticompetitive conduct by a railroad before STB will grant access to a terminal by a nonowning railroad unless there is an emergency or the shipper can demonstrate poor service and a second railroad is willing and able to provide the requested service. This approach would require revisiting the current requirement that railroads or shippers demonstrate anticompetitive conduct in making a case to gain access to a railroad terminal in areas where there is inadequate competition. The approach would also make it easier for competing railroads to gain access to the terminal areas of other railroads and could increase competition between railroads. However, it could also reduce revenues to all railroads involved and adversely affect the financial condition of the rail industry. Also, shippers could benefit from increased competition but might see service decline (see fig. 22). Trackage rights: This approach would require one railroad to grant access to its tracks to another railroad, enabling railroads to interchange traffic beyond terminal facilities for a fee. In the past, STB has imposed conditions requiring that a merging railroad must grant another railroad trackage rights to preserve competition when a merger would reduce a shipper’s access to railroads from two to one. 
While this approach could potentially increase rail competition and decrease rail rates, it could also discourage owning railroads from maintaining the track or providing high-quality service, since the value of lost use of track may not be compensated by the user fee and may decrease return on investment (see fig. 23). “Bottleneck” rates: This approach would require a railroad to establish a rate, and thereby offer to provide service, for any two points on the railroad’s system where traffic originates, terminates, or can be interchanged. Some shippers have more than one railroad that serves them at their origin and/or destination points, but have at least one portion of a rail movement for which no alternative rail route is available. This portion is referred to as the “bottleneck segment.” STB’s decision that a railroad is not required to quote a rate for the bottleneck segment has been upheld in federal court. STB’s rationale was that statute and case law precluded it from requiring a railroad to provide service on a portion of its route when the railroad serves both the origin and destination points and provides a rate for such movement. STB requires a railroad to provide service for the bottleneck segment only if the shipper had prior arrangements or a contract for the remaining portion of the shipment route. On the one hand, requiring railroads to establish bottleneck rates would force short-distance movements on railroads that serve an entire route, could result in loss of business, and could potentially subject the bottleneck segment to a rate complaint. On the other hand, this approach would give shippers access to a second railroad, even if a single railroad was the only railroad that served the shipper at its origin and/or destination points, and could potentially reduce rates (see fig. 24). 
Paper barriers: This approach would prevent, or put a time limit on, paper barriers, which are contractual agreements that can occur when a Class I railroad sells or leases, long term, some of its track to other railroads (typically a short-line railroad and/or regional railroad). These agreements stipulate that virtually all traffic that originates on that line must interchange with the Class I railroad that originally leased the tracks or pay a penalty. Since the 1980s, approximately 500 short lines have been created by Class I railroads selling a portion of their lines; however, the extent to which paper barriers are a standard practice is unknown because they are part of confidential contracts. When this type of agreement exists, it can inhibit smaller railroads that connect with or cross two or more Class I rail systems from providing rail customers access to competitive service. Eliminating paper barriers could affect the railroad industry’s overall capacity since Class I railroads may abandon lines instead of selling them to smaller railroads, thereby increasing the cost of entering a market for a would-be competitor. In addition, an official from a railroad association told us that it is unclear if a federal agency could invalidate privately negotiated contracts (see fig. 25). It will be important for policymakers, in evaluating these alternative approaches, to carefully consider the impact of each approach on the balance set out in the Staggers Rail Act. One significant consideration is the revenue adequacy of the railroads. The Staggers Rail Act established revenue adequacy as a goal for the industry and allowed the railroads to use differential pricing to increase their revenues. While the specific method for determining revenue adequacy has been controversial, the overall trend in revenue adequacy may be more important. 
In its last report for 2004, STB determined that one railroad is revenue adequate and that others are approaching revenue adequacy. It is too early to determine that the industry as a whole is achieving revenue adequacy. Nevertheless, this improvement in the railroads’ financial condition represents a significant shift in the rail industry because for decades after the enactment of the Staggers Rail Act, the railroads were all considered revenue inadequate. The railroads need sufficient revenue for infrastructure investment to keep pace with increased demand. However, each of the competitive approaches described above could decrease the amount of revenue the railroads receive. Yet, as the railroads’ revenue adequacy improves, the question arises as to what degree the railroads should continue to rely, for their investment needs, on obtaining significantly higher prices from those with greater reliance on rail transportation. To prevent problems with unreasonable rates, some shipper groups propose targeted approaches that would provide them with more competitive options. A number of different approaches have also been suggested to make the rate relief process less expensive, more expeditious, and therefore potentially more accessible. Each of the proposed approaches has both advantages and drawbacks. These approaches include the following: Increase the use of simplified guidelines: The simplified guidelines use standard industry average figures for revenue data instead of requiring the shipper to create a hypothetical railroad. This approach would reduce the time and complexity of the process; however, it may not provide as accurate and precise a measure as the standard process. Both shipper and railroad officials with whom we spoke agree that it is confusing to determine who is eligible to use the process and how it would work. 
STB recently issued a proposed rule making to pursue changes to the simplified guidelines to provide captive shippers greater access to regulatory remedies for unreasonable rail rates. Increase the use of arbitration: Under arbitration, two parties present their case before an arbitrator, who determines the rate. This process replaces the shipper’s requirement to create a hypothetical railroad. Proponents of arbitration argue that the threat of arbitration can induce railroads and shippers to resolve their own problems and limit the need for federal regulation. In addition, the process is quicker and cheaper than the standard rate relief process. For example, Canada offers an arbitration process known as Final Offer Arbitration (FOA), under which both parties submit their best and final offers, and the arbitrator considers the argument from both sides and picks one rate offer from either the railroad or the shipper. FOA is quicker—statutorily, once the process begins it has to be completed within 60 days, or 30 days for disputes involving freight charges of less than $750,000, unless the parties agree to a different time frame. In addition, FOA is cheaper—estimates ranged up to $1 million in Canadian dollars for both parties. On the other hand, the decisions are good for only 1 year, so the process could in theory be revisited annually. Critics of this approach suggest that arbitration decisions may not be based on economic principles, such as the revenue and cost structure of the railroad, and arbitrators may not be knowledgeable about the railroad industry. Furthermore, opinions differ significantly about which types of disputes should be covered and what standards (if any) should apply. 
Develop an alternative cost methodology: STB could develop an alternative to the cost methodology used under the standard process, in which a shipper must demonstrate how much an optimally efficient railroad would need to charge by constructing a hypothetical, perfectly efficient railroad that would replace its current carrier. For example, STB could use a long-run incremental cost approach to evaluate and decide rate cases. This process, which is used by the Federal Energy Regulatory Commission for regulating rates charged by pipeline companies, bases rates on the actual incremental cost of moving a particular shipment, plus a reasonable rate of return. This approach allows for a quick, standard method for setting prices, but does not take into account the need for differential pricing or the railroad’s need to charge higher rates in order to become revenue adequate. Structuring rate regulation around actual costs can also create potential disincentives for the regulated entity to control its costs. Recent forecasts predict that the demand for freight and freight rail transport will grow significantly in the future. While forecasts have limitations as guides to investing in new transportation infrastructure, they can present a plausible picture of future freight demand and capacity. Whether private rail companies will be able and willing to invest in new infrastructure capacity to meet projected future demand is uncertain. New rail capacity not only benefits each private rail company network, but it also has the potential to benefit the public by improving traffic flow, air quality, and safety at the national, state, and local levels. As a result, the public sector has increasingly been investing in freight rail projects. Federal involvement in the freight system should be consistent with the competitive marketplace and ensure that funding decisions reflect widespread public priorities. 
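The long-run incremental cost approach described above amounts to a one-line pricing rule. The sketch below is an illustrative simplification, not FERC’s or STB’s actual rate methodology; the function name, allowed return, and cost figures are assumptions:

```python
# Hedged illustration of a long-run incremental cost (LRIC) rate:
# price a movement at its actual incremental cost plus a reasonable
# rate of return, rather than at whatever differential pricing the
# market will bear. All figures are hypothetical.

def lric_rate(incremental_cost: float, allowed_return_pct: float) -> float:
    """Rate = incremental cost of the shipment plus an allowed return."""
    return incremental_cost * (1.0 + allowed_return_pct / 100.0)

# A movement costing the railroad $1,200 to handle, with a 10 percent
# allowed return, would be priced at $1,320, the same for every
# shipper regardless of its captivity.
print(round(lric_rate(1200.0, 10.0), 2))  # 1320.0
```

The uniformity of the rule is exactly the trade-off noted above: it is quick and standard, but it leaves no room for the differential pricing railroads rely on to reach revenue adequacy.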
The demand for freight transportation in general and freight rail specifically is forecasted to increase, according to recent studies. Several of these studies also quantify their projections of the volume and value of future freight demand. The Freight Analysis Framework (FAF) is a comprehensive database and policy analysis tool maintained by DOT to help identify needed freight capacity improvements. In 2002, DOT projected, using this tool, that overall domestic and international freight demand would increase by more than 65 percent and 84 percent, respectively, by 2020. In 2003, the American Association of State Highway and Transportation Officials (AASHTO) released the Freight Rail Bottom Line Report, prepared by a consulting firm. This report describes the industry and its benefits to the nation, estimates the industry’s investment needs and capacity to meet these needs, and quantifies the consequences of underinvestment, including highway deterioration and congestion. The AASHTO study projected that, by 2020, overall domestic freight demand by ton would increase by 57 percent and international demand would increase by 99 percent. In 2005, the American Trucking Associations’ (ATA) report U.S. Freight Transportation Forecast to 2016 projected tonnage and revenues for all freight modes. The report predicted that overall freight volume would increase by about 32 percent between 2004 and 2016. Freight rail demand is projected to increase less than overall freight demand and to grow at a slower rate than demand for other modes—such as truck and air freight. FAF projects that freight rail tonnage will grow about 55 percent by 2020, but this growth will not be as dramatic as for truck and air, and rail will account for a much smaller share of the market when measured on the basis of shipment value. AASHTO predicts that freight rail tonnage will increase 44 percent by 2020. However, it notes that this forecast actually indicates that rail will lose some market share. 
This estimate also assumes that considerable investment will be required—up to about $4 billion annually—to meet future demand. According to ATA’s forecast, freight rail tonnage will grow annually by 2.4 percent to 2010 and by 2.1 percent to 2016. While rail intermodal traffic is forecast to grow rapidly, the study anticipates that rail’s overall share of total freight tonnage will decrease slightly from about 15.6 percent in 2004 to about 15.4 percent in 2016. However, many factors can affect the accuracy of these predictions. Freight markets are volatile and unpredictable, and thus freight demand forecasts may prove to be off the mark. Similarly, much freight traffic is determined by trade that originates outside the United States. Moreover, since the data and models used to develop these freight demand forecasts are largely proprietary, we could not assess the validity or reasonableness of the assumptions used to develop the predictions. Nevertheless, forecasts of freight and freight rail demand are useful as one plausible scenario for the future. As the Congressional Budget Office (CBO) observed in a January 2006 report, forecasts of demand are best viewed as illustrative rather than quantitatively accurate. If demand does develop as forecasted, it is uncertain how able and willing railroads will be to invest in new capacity. Railroads do not prepare long-term capacity plans because of concern about the potential for significant economic changes—for example, officials at one Class I railroad stated that they prepare capacity improvement plans and demand projections for 3 to 5 years into the future, with frequent revisions. In addition, the railroads we interviewed were generally unwilling to discuss their future investment plans with us in any detail because this is business proprietary information. 
It is therefore difficult to comment on how railroads are likely to choose among their competing investment priorities for the future compared with various demand scenarios. Railroads’ ability and willingness to invest in new capacity to meet demand reflect a number of key considerations. For privately owned rail companies, a key business consideration is maximizing returns for shareholders. To do so, realizing the greatest return on investment from each investment decision is essential and is reinforced by pressure from shareholders. Rail investment involves private companies taking a substantial risk that becomes a fixed cost on their balance sheets, one on which they are accountable to stockholders and for which they must make capital charges year in and year out for the life of the investment. A railroad contemplating such an investment must be confident that the market demand for that infrastructure will hold up for 30 to 50 years. This is in sharp contrast to other modes, such as highways, whose infrastructure is paid for largely by public funds. Maximizing a rail company’s competitive position in key markets is important in deciding on investments in the company network’s size and facilities. For example, the growth of intermodal transport is a major development for freight rail because it stands to be the largest revenue generator for the Class I railroads. As a result, there is intense competition for this business, although intermodal business also means that freight rail both competes and cooperates with other freight modes. However, intermodal growth depends on the railroads’ ability to invest in the new capacity needed to meet this demand. Investment considerations are complicated by the current status of rail infrastructure. Although the rail network has been downsized, the infrastructure remains extensive but aging. 
Replacing, maintaining, and upgrading this infrastructure is extremely costly, as the Transportation Research Board emphasized in its analysis of critical transportation issues. Predicting the extent to which future rail investments will keep pace with projected freight rail demand is complicated by the extent of current rail needs. For example, an annual assessment of America’s infrastructure conducted by the American Society of Civil Engineers gave rail infrastructure a “C-” grade and noted that, for the first time in 90 years, limited capacity has created significant bottlenecks in the national rail network. However, railroads must invest in new infrastructure, new equipment, and substantial new capacity to handle additional traffic in order to remain viable and effective, a rail industry representative told our expert panel. Today, freight railroads are sufficiently profitable to be investing at record levels. Major freight railroads have reported that they expect to invest about $8 billion in infrastructure during 2006—a 21 percent increase over 2005—and have told us that they plan to continue making infrastructure investments. However, not all of this investment is planned for capital or new capacity. Although we requested additional detail about how the rail industry’s $8 billion estimated investment was divided between new capacity and maintenance or renewal of existing capacity, the Association of American Railroads indicated that this information is not currently available but will be part of a special study on railroad spending trends. While private rail networks obtain benefits and improve their profitability from investments in their capacity, these investments also can benefit the public. In fact, some public benefits can be large in comparison to anticipated benefits to the private rail network, as the CBO report pointed out. 
For example, shifting truck freight traffic to railroads can reduce highway congestion for passenger and commercial vehicles, potentially reducing or avoiding public expenditures that otherwise would be needed to build additional highway capacity or provide additional maintenance to accommodate growing truck traffic. Depending on the rail infrastructure project, the public could realize several types of benefits, as described in table 3. Rail projects can vary widely in the extent to which they may generate public as well as private benefits; whether benefits are realized by the private or public sector at the national, state, and local levels; and how the benefits are quantified for the purpose of fairly apportioning project financing. Determining what benefits and costs are associated with a rail infrastructure project and who benefits is important in deciding whether public funds for public benefits are justified—but this is a difficult determination. For example, one rail infrastructure project that reduces system bottlenecks may generate benefits to the national economy by lowering the costs of producing and distributing goods. Another rail project that eliminates or improves highway-rail crossings may primarily produce local benefits by reducing accidents, time lost waiting for trains to pass, pollution and noise from idling trains, and delays of emergency vehicles at crossings. The same project also may produce national benefits by reducing the impact of train delays on the system. Increasingly, governments at all levels have been investing in freight rail improvement projects that offer potential public benefits. At the state and local levels, government involvement has ranged from planning and coordination to collaboration and investment with freight rail companies and other stakeholders. Some states have been investing to help short-line railroads maintain track in their states for almost 20 years. 
Other states—such as Florida, Virginia, New York, and Pennsylvania—are creating significant new programs to invest in rail projects. Over 30 states have published freight plans that describe their goals and approach to freight and freight rail. The scope of state and local freight rail investments continues to expand. For example, Missouri state and local governments, in partnership with railroads and other stakeholders, supported two major rail bridge flyover projects to reduce rail delays in Kansas City. These projects—totaling $134 million—were expected to provide economic benefits and reduce rail transit time through the city by about 2 hours. The project also used an innovative institutional arrangement that created a special type of corporation to facilitate its funding. Colorado’s Department of Transportation (CDOT), other public entities, and two Class I railroads are exploring an ambitious partnership to relocate freight train facilities away from the heavily populated Front Range area of the state, as the two railroads proposed. CDOT initiated a benefit-cost study that found sufficient public transportation, economic development, land use, safety, environmental, and passenger rail facilitation benefits to warrant investing public dollars in the project—estimated to cost about $1.17 billion. The federal government also has been involved in freight rail projects. In 1997, DOT provided a $400 million loan for the $2.4 billion Alameda Corridor project to leverage funds from ports, railroads, and local governments. As a result, a 20-mile trench for trains was constructed to eliminate numerous rail-highway crossings and reduce rail transport time to and from the ports of Los Angeles and Long Beach—a significant gateway for freight imported from Asia and distributed throughout the United States. In 2005, Congress provided $100 million to the $1.5 billion Chicago Region Environmental and Transportation Efficiency (CREATE) program. 
Its objective is to cut train delays and congestion and improve passenger rail service by separating 25 rail-highway crossings, building 6 passenger/freight train flyovers, and upgrading tracks and controls to improve service for the one-third of the nation’s rail traffic that comes through Chicago each day. Railroads and state and local governments are contributing to the program’s financing. In 2005, Congress also passed the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), which increased the authorized level of funds available under the Railroad Rehabilitation and Improvement Financing (RRIF) program from $3.5 billion to $35 billion over a 5-year period. This program provides loans or loan guarantees that are available to states or railroads for projects to acquire, improve, or rehabilitate rail infrastructure. A number of proposals before Congress would increase federal funding for freight railroad projects. One proposal calls for the creation of a Railroad Trust Fund that would be similar to the Highway Trust Fund, which is used to pay for highway construction and improvements. Another proposal calls for a railroad investment tax credit. Under this proposal, railroads or shippers would receive a 25 percent tax credit for money spent to expand rail infrastructure. Federal decision makers face considerable uncertainty about the future of freight transportation coupled with considerable certainty that the federal deficit will be a long-term constraint on federal investment. At the same time, Congress will continue to face policy and funding decisions that will affect all freight modes and have a critical impact in shaping the nation’s rail system and infrastructure. 
As we have noted in our past work, a strategic systemwide approach to transportation planning and funding that focuses on all modes is increasingly important to meet expectations for more efficient freight transport, growing freight demand, and more connections between modes. Federal funding constraints enhance the need for a strategic federal approach to freight infrastructure investment, and the implications of these constraints are a critical feature of a national freight policy. Given major projected demographic shifts and future federal health and retirement commitments, federal revenues may barely cover interest on the federal debt by 2040—leaving no money for either mandatory or discretionary programs. According to our simulations, balancing the budget could require cutting federal spending by as much as 60 percent, raising taxes by up to 2-1/2 times their current level, or some combination of the two. We have concluded that the impending federal fiscal crisis will require a fundamental reexamination of all federal programs. For example, our assessment of the federal highway grant program raised significant issues, such as the absence of a clear federal mission and role since the completion of the interstate highway system and the absence of a link between federal funding and goals or outcome measures. DOT has taken an important step toward a more comprehensive freight strategy by publishing a draft Framework for a National Freight Policy for comment. It is a step for which we found considerable support among public and private freight stakeholders. A systemwide, rather than a modal, perspective is critical to a national freight policy. As the AASHTO study emphasized, investments at the freight system level are needed to respond to nationally significant corridor choke points, intermodal connections, and urban rail interchanges. 
With federal fiscal constraints as the backdrop, two major policy principles will need to be considered as DOT continues to develop this national policy. These principles are, first, to adopt a mode-neutral approach—one that takes a consistent policy and funding approach to all modes and establishes a level playing field for competition in the freight marketplace—and, second, to maximize public benefits—particularly benefits to the national transportation system—from public transportation investments. Under a mode-neutral approach, each mode would pay the full costs for the infrastructure facilities and services that it used as well as the costs that its use imposed on others—such as added air pollution, congestion, and accident risks—through taxes and user fees. No single mode would be at a competitive disadvantage. A mode-neutral federal freight policy and investment strategy would be consistent with the competitive market’s central role in the freight system. Encouraging a market-based approach and competition that fosters economic efficiency and innovation is a key consideration in dealing with the privately owned freight rail industry, as we have reported. Currently, as we have pointed out, federal programs treat different freight modes differently. For example, trucks and barges use infrastructure that is owned and maintained by the government, while rail companies use infrastructure that they pay to own and maintain. The trucking and barge industries pay fees and taxes to use this government-funded infrastructure, but their payments generally do not cover the costs they impose on highways and waterways, thereby giving the trucking and barge industries a competitive price advantage over railroads. The most recent Federal Highway Administration (FHWA) highway cost allocation study evaluates highway costs attributable to different vehicle classes and the extent to which their user fees cover their responsibility for highway costs. 
According to the study, combination unit trucks paid 80 percent of their cost responsibility and the heaviest combinations paid half of their cost responsibility. The study concluded that only the very lightest combination trucks pay their share of federal highway cost responsibility. A recent CBO report also concluded that trucks and barges do not pay their full share of highway costs and reported that rail may be at a competitive disadvantage, since other modes are effectively being subsidized. CBO also observed that if all modes do not pay their full costs, the result is inefficient use of roads and waterways and greater government spending than otherwise would be necessary if capacity investments are made in anticipation of demand that does not occur. As DOT develops and applies a national freight policy, our second critical principle will be an important consideration—public investments should depend on clearly defined public benefits. Benefit-cost analysis can be a useful tool to define benefits, as our expert panel on this subject concluded. Because this analysis identifies the greatest net benefits by comparing the monetary value of each project’s benefits and costs, it can help public and private stakeholders evaluate project alternatives. States have had experience in evaluating whether rail projects could yield sufficient public benefits to warrant investments of public dollars in the projects, and their experience can inform a national freight policy. For example, the state of Washington’s Freight Mobility Strategic Investment Board leverages transportation dollars by working with public and private stakeholders to fund projects that deliver public benefits. 
The board’s project scoring criteria reflect anticipated benefits, such as freight mobility for the project area; freight mobility for the region, state, and nation; general mobility; safety; freight and economic value; environment; project partnership; consistency with regional and state plans; location on a Strategic Freight Corridor; and cost benefit. However, federal decision makers have no such criteria to use in considering potential freight rail investments. As we have pointed out, the federal funding structure for surface transportation and federal program incentives tend to focus decision makers’ attention on highway and transit projects, rather than on freight or freight rail concerns. And, although state and local transportation decision makers consider benefit-cost analyses, these analyses often do not have a decisive impact on investment decisions. As DOT has noted, a fair, balanced approach to allocating public and private funding is a prerequisite for public-private partnerships. We have also raised concerns about federal tax policies. For railroads, some industry groups have proposed freight rail tax credits to encourage investment. However, our work has shown that it is difficult to target tax credits to the desired activities and outcomes and ensure that tax credits generate the desired new investments, as opposed to substituting for investment that would have occurred anyway. The Staggers Rail Act achieved far-ranging benefits in helping to create and sustain a healthy and vibrant freight railroad industry, as well as an efficient rail transportation system that supports the important role freight plays in the nation’s economy. Critical to the Staggers Rail Act was the concept of balance—on one hand, the act sought to allow rail carriers to earn adequate revenues so that they could meet their current and future capital needs. 
On the other hand, the act recognized the need for a remnant regulatory regime that would maintain reasonable rates and prohibit undue concentrations of market power in areas where no effective competition existed. The act recognized that it was vital for the federal government to promote competition and rely on it to set rates. Without a doubt, rates have decreased for most shippers, and most shippers are better off in the post-Staggers environment than they were previously. This outcome suggests that widespread and fundamental changes to the relationship between the railroads and their customers are not needed. Nevertheless, the evidence also suggests some basis for believing that—more than 25 years after the act’s passage—the balance it envisioned has not been fully achieved. The continued existence of pockets of potential captivity, together with the increase in traffic at higher thresholds, at a time when the railroads are, for the first time in decades, experiencing increasing economic health, raises the question whether rail rates in selected markets reflect justified and reasonable pricing practices, or an abuse of market power by the railroads. Answering this question requires a rigorous, national analysis of competitive markets. Our analysis provides an important first step; however, we are constrained by the inherent limitations of the Carload Waybill Sample and the available proxy measures for assessing captivity. In contrast, STB has the statutory authority to inquire into and report on railroad practices and could conduct a rigorous analysis of competition in the freight rail industry that would rely on more than sample data and could determine whether the inappropriate exercise of market power is occurring in specific markets. Should STB find evidence of abuse, it could consider several methods for creating the balance envisioned by the Staggers Rail Act.
For example, STB could consider initiating a generally applicable rule making to address competition issues or prescribe specific remedies in response to a complaint. In assessing competition within the freight rail industry, STB needs accurate data on railroad revenues. The data that STB currently collects—in particular, the use of the Carload Waybill Sample to report on the railroads’ finances—are not always captured consistently, making it difficult to accurately track railroad revenues. Specifically, while we determined that, in general, the data in the Waybill were suitably reliable for our reporting purposes, we also found that some data, including data on fuel surcharges, were not accurately captured. Accurate data would provide for more accurate tracking of railroad revenues and railroad charges to potentially captive shippers and other shippers. This information would help STB to obtain a clearer picture of the actual fees paid by shippers. STB is also responsible for ensuring the expeditious handling and resolution of rate disputes, but the current process for settling these disputes is ineffective. There are a number of potential alternatives to the current process, and STB has recognized the limits of the process and taken further action to improve it. These actions are commendable and need to be pursued; absent further action, the promise of the Staggers Rail Act and the balance it envisioned may never be fully realized. These are difficult issues that require careful balancing of the railroads’ need to earn adequate revenues with shippers’ need for competition and reasonable rates during a time of uncertainty about the capacity of freight railroads to meet future demand for freight rail service. While predictions and scenarios for the future of freight rail vary, it is likely that multiple levels of government will continue to be involved in the nation’s freight system.
Additional investment in freight rail infrastructure can produce public benefits, and many state and local governments are involved in freight rail infrastructure projects. Congress has provided federal assistance as well, and further requests for and decisions about federal assistance to rail infrastructure are likely. Decision makers will be challenged to ensure that federal involvement is consistent with competition in the freight marketplace, reflects widespread public priorities, and offers benefits that warrant the commitment of federal funds. DOT’s draft National Freight Policy represents a good start in this direction. To ensure an appropriate balance between the interests of railroads and shippers, we recommend that the Chairman of the Surface Transportation Board take the following two actions:

Undertake a rigorous analysis of competitive markets to identify the state of competition nationwide; in specific markets, determine whether the inappropriate exercise of market power is occurring; and, where appropriate, consider the range of actions available to address problems associated with the potential abuse of market power. If the Chairman determines that STB requires more resources to conduct this analysis, then STB should request additional resources from Congress.

Review STB’s method of data collection to ensure that all freight railroads are consistently and accurately reporting all revenues collected from shippers, including fuel surcharges and other costs not explicitly captured in all railroad rate structures.
To ensure the efficiency and effectiveness of our nation’s freight system, we are making the following recommendation to the Secretary of Transportation: As DOT continues to develop a national freight policy and a possible federal policy response, consider strategies to (1) sustain the role of competitive market forces by creating a level playing field for all freight modes and (2) recognize the fiscally constrained federal funding environment by developing mechanisms to assess and maximize public benefits from federally financed freight transportation investments. STB provided written comments on a draft of this report. These comments are presented and evaluated in appendix III. STB generally agreed with our assessment of the improving financial health of the freight railroad industry and potential public benefits for freight rail infrastructure projects. However, STB disagreed with our recommendation to undertake a rigorous analysis of competitive markets in the rail industry because it believed that the findings underlying the recommendation were inconclusive, that its ongoing efforts would address many of our concerns, and that a rigorous analysis would divert resources from other efforts. Specifically, STB stated that our recommendation was based on two findings—first, that rail rates have increased for some shippers and, second, that the amount of traffic with rates reflecting high R/VC ratios has increased in some areas. STB stated that recent increases in rail rates are not surprising and that R/VC ratios can increase when rates and costs are falling and that these findings do not suggest market abuses. STB also noted that it has several rule makings under way related to the standard rate relief process and the simplified rate relief process.
STB suggested that, given the limitations on its resources and the aggressive agenda already under way, rather than undertake this competitive markets analysis, a more practical approach would be for STB to finish its reforms to ensure that captive shippers have an effective forum to seek rate relief if a railroad is charging unreasonable rates. Concerning our recommendation that STB review its method of data collection to ensure that all freight railroads are consistently and accurately reporting all revenues collected from shippers, STB stated that the revenue in question represents a small portion of all revenues and that revenue data submitted by freight railroads are audited and otherwise checked to ensure quality. Furthermore, STB has initiated a rule making to improve the tracking of fuel surcharges. While STB’s efforts have been helpful, we continue to believe that STB should undertake a rigorous analysis of competitive markets to identify the state of competition nationwide; in specific markets, determine whether the inappropriate exercise of market power is occurring; and, where appropriate, consider the range of actions available to address problems associated with the potential abuse of market power. STB’s comments do not accurately characterize the underlying support for our recommendation. We did not base this recommendation on an increase in rail rates or suggest that rate increases alone suggest increased captivity. On the contrary, we recognize that rates have declined and that available measures suggest that the extent of captivity has dropped. Furthermore, STB’s response suggests that rail rates and the amount of traffic with high R/VC ratios were the only data we examined—they were not.
We examined several factors, including data on the amount of tonnage originating in economic areas that have access to only one Class I railroad, data on the amount of tonnage traveling over 300 percent R/VC, and the amount of tonnage that originates in areas with access to only one Class I railroad and travels at rates that exceed the statutory threshold for rate relief. Our report explicitly acknowledges the limitations in the Carload Waybill Sample and of the proxy measures available for weighing captivity, including R/VC levels. At the same time, our analyses, when combined with comments from participants on our expert panel and interviews with shipper and railroad groups, suggest a reasonable possibility that shippers in selective markets may be paying excessive rates related to a lack of competition. This provides the impetus for STB— which has the statutory authority to inquire into and report on railroad practices—to analyze competitive markets in the rail industry and, where appropriate, consider the range of actions to address problems associated with the potential abuse of market power. Also, this analysis would rely on more than sample data and could analyze the exercise of market power in specific markets. Regarding STB’s position that it has several rule makings under way that address many of our concerns, we commend STB for recognizing and taking action to address problems with the rate relief process, but we believe action is needed beyond improvements to the rate relief process. These rule makings, if implemented, are designed to improve the processes available to shippers, after shippers have been charged a rate that they consider to be unreasonable. 
In contrast, we believe that an analysis of the state of competition and the possible abuse of market power, along with the range of options STB has to address competition issues, could more directly further legislatively defined goals to ensure effective competition among rail carriers as the preferred means to both promoting a sound rail transportation system and maintaining reasonable rates. Regarding STB’s assertion that conducting a rigorous analysis of competition would divert resources away from its ongoing initiatives, we modified our draft to recommend that STB request additional resources from Congress if it determines it needs more resources to conduct an analysis of competition. We also believe that STB should review its method of data collection to ensure that all freight railroads are consistently and accurately reporting all revenues. STB commented that it had already responded to this concern by proposing a standardized report for fuel surcharges; however, while we commend STB for its efforts to capture these data, we also note STB has not yet implemented standardized reporting of fuel surcharges and that other revenues besides fuel surcharges may not be included in the Waybill. STB also provided technical comments that we incorporated in this report, as appropriate. We requested comments on a draft of this report from the Acting Secretary of Transportation or her representative. On September 21, 2006, DOT officials, including the Deputy Associate Administrator for Policy, Federal Railroad Administration, and the Chief Economist, Office of Transportation Policy, Office of the Secretary, provided us with oral comments on the draft. In its comments, DOT emphasized the need for the report to clearly recognize the rationale and importance of differential pricing; the nature and relatively small extent of potentially unreasonable pricing in the rail freight marketplace; and the impact of capacity constraints on rail pricing and services.
DOT also suggested that our report should recognize certain factors, including that competition between railroads is not possible in all markets because the level of demand may not support more than one railroad, and that investment in freight rail infrastructure entails substantial private risk. In contrast, highway investment has been largely publicly financed. DOT did not take a position on our recommendation concerning the draft National Freight Policy, but stated that efforts are under way to develop more effective tools for gauging the extent to which proposed freight investments provide public benefits. DOT also endorsed the views contained in STB’s September 15, 2006, letter (see app. III). We made changes to this report to reflect DOT’s comments, as appropriate. DOT also provided a number of technical corrections, which we incorporated as appropriate. We will send copies to the appropriate congressional committees, the Chair and Vice-Chairs of the Surface Transportation Board, and the Secretary of Transportation. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has any questions, please contact me at (202) 512-2834 or heckerj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix V for a list of major contributors to this report.

Louis S. Thompson (Moderator), Principal, Thompson, Galenson and Associates, LLC
George Borts, Department of Economics, Brown University
George Eads, Vice President, CRA International
Robert Gallamore, Director, Transportation Center, Northwestern University
Darius Gaskins, Founding Partner, Norbridge, Inc.
We used the Surface Transportation Board’s (STB) Carload Waybill Sample to identify railroad rates from 1985 through 2004 (the latest rate data available at the time of our review), which we then analyzed to determine rate changes. The Carload Waybill Sample is a sample of railroad waybills (in general, documents prepared from bills of lading authorizing railroads to move shipments and collect freight charges) submitted by railroads annually. We used these data to obtain information on rail rates across the industry, for certain commodities and for certain routes by shipment size and length of haul. According to STB officials, revenues derived from the Carload Waybill Sample are not adjusted for such things as year-end rebates and refunds that may be provided by railroads to shippers that exceed certain volume commitments. Some railroad movements contained in the Carload Waybill Sample are governed by contracts between shippers and railroads. To avoid disclosure of confidential business information, STB disguises the revenues associated with these movements before making this information available to the public. Consistent with our statutory authority to obtain agency records, we obtained a version of the Carload Waybill Sample that did not disguise revenues associated with railroad movements made under contract. Therefore, the rate analysis presented in this report presents a truer picture of rail rate trends than analyses that may be based solely on publicly available information. Since much of the information contained in the Carload Waybill Sample is confidential, rail rates and other data contained in this report that were derived from this database have been aggregated at a level sufficient to protect this confidentiality. We used rate indexes and average rates to measure rate changes over time. 
A rate index attempts to measure price changes over time by holding constant the underlying collection of items that are consumed (in the context of this report, items shipped). This approach differs from comparing average rates in each year because, over time, higher- or lower-priced items can constitute different shares of the items consumed. Comparing average rates can confuse changes in prices with changes in the composition of the goods consumed. In the context of railroad transportation, rail rates and revenues per ton-mile are influenced, among other things, by the average length of haul. Therefore, comparisons of average rates over time can be influenced by changes in the mix of long- and short-haul traffic. Our rate indexes attempted to control for the distance factor by defining the underlying traffic as 2004 commodity flows between pairs of census regions. To examine the rate trends on specific traffic corridors, we first chose a level of geographic aggregation for corridor end points. We defined end points as the regional economic areas defined by the Department of Commerce’s Bureau of Economic Analysis. An economic area is a collection of counties in and about a metropolitan area (or other center of economic activity); there are 179 economic areas in the United States, and each of the nation’s 3,141 counties is included in an economic area. We placed each corridor in one of three distance-related categories: 0 to 500 miles, 501 to 1,000 miles, and more than 1,000 miles. Although these distance categories are somewhat arbitrary, they represent reasonable proxies for short-, medium-, and long-distance shipments by rail. To determine the areas with access to one or more Class I railroads, we obtained railroad systems data from the Department of Transportation, which accounted for trackage rights, mergers, and other industry developments affecting access.
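The distinction between a fixed-weight rate index and a simple average rate can be sketched as follows. All figures below are hypothetical, and the sketch does not reproduce the exact index construction used in this report; in the actual analysis, the fixed weights were 2004 commodity flows between pairs of census regions.

```python
# Minimal illustration of a fixed-weight rate index (hypothetical figures only).
# Holding the traffic mix constant isolates price change from composition change.

# Fixed base-period traffic weights (ton-miles) for two hypothetical flows.
weights = {"short_haul": 100, "long_haul": 900}

# Rates in cents per ton-mile for two years; both flows' rates decline.
rates_y1 = {"short_haul": 8.0, "long_haul": 2.0}
rates_y2 = {"short_haul": 7.0, "long_haul": 1.8}

def fixed_weight_index(rates, base_rates, weights):
    """Rate level relative to the base year (base = 100), traffic mix held fixed."""
    cost = sum(weights[k] * rates[k] for k in weights)
    base = sum(weights[k] * base_rates[k] for k in weights)
    return 100.0 * cost / base

print(round(fixed_weight_index(rates_y2, rates_y1, weights), 1))  # 89.2

# By contrast, a simple average rate computed on each year's actual traffic
# would also reflect any shift in the mix toward cheaper long hauls,
# confounding price change with composition change; the index avoids this.
```

The index falls below 100 because every rate fell on the constant 2004-style traffic mix; an average rate over each year's actual traffic could move for mix reasons alone.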
For issues related to revenue-to-variable cost ratios, we used data from the Carload Waybill Sample to identify the specific revenues and variable costs and to compute R/VC ratios for the commodities and markets we examined. Using this information, we then identified those commodities and areas whose R/VC ratios were above or below the 180 percent R/VC level, as well as those areas above the 300 percent R/VC level. To identify the actions STB has taken to address competition and captivity concerns, we interviewed officials from, and reviewed information provided by, all seven North American Class I railroads, several shipper groups and associations, and STB; we also met with experts in the railroad industry. We reviewed characteristics of STB’s current rate relief process, as well as changes STB has made to the process, and conducted a comprehensive analysis of STB cases since 2000. We also held an expert panel through the National Academy of Sciences, consisting of 11 individuals with expertise in the freight railroad industry and the economics of transportation deregulation. Moreover, we conducted a legal analysis of current statutes related to STB’s authority. To discern potential alternatives, we reviewed pending legislation, testimonies before Congress, previous GAO reports, STB decisions, rule makings, and proposed rule makings, and conducted a summary analysis of interviews. To assess future freight demand and the freight railroad industry’s ability to meet such demand, we reviewed transportation planning literature and forecasts of future freight rail demand and capacity in the United States. This review also included state freight plans and major freight rail projects. We synthesized information on freight and freight rail, as well as various forecasts to identify similar and dissimilar themes. We also reviewed involvement by the federal government in freight railroad projects, including related legislation and funding decisions.
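The R/VC screen described at the start of this appendix can be sketched as follows. The movement records here are hypothetical (the actual Waybill data are confidential), and the variable-cost figures simply stand in for STB's costing results.

```python
# Illustrative revenue-to-variable-cost (R/VC) screen on hypothetical movements.
# 180 percent is the statutory threshold for rate relief eligibility;
# 300 percent is the higher level examined in this report.

movements = [
    # (tons, revenue in dollars, variable cost in dollars)
    (1000, 30_000, 20_000),  # R/VC = 150 percent: below the threshold
    (2000, 90_000, 45_000),  # R/VC = 200 percent: potentially captive
    (500, 40_000, 12_000),   # R/VC = 333 percent: above 300 percent
]

def tons_at_or_above(movements, threshold_pct):
    """Total tonnage moving at rates at or above the given R/VC percentage."""
    return sum(tons for tons, revenue, var_cost in movements
               if 100.0 * revenue / var_cost >= threshold_pct)

total_tons = sum(tons for tons, _, _ in movements)
print(tons_at_or_above(movements, 180), "of", total_tons, "tons at 180%+ R/VC")
print(tons_at_or_above(movements, 300), "of", total_tons, "tons at 300%+ R/VC")
```

Aggregating flagged tonnage by commodity and economic area, rather than by individual movement, is what allows the results to be reported without disclosing confidential waybill records.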
We interviewed several state and federal transportation officials to gather further information on public-private partnerships, freight railroad projects, and DOT’s draft National Freight Policy. We also interviewed freight railroad representatives, financial market analysts, national association representatives, and transportation experts. For selected public-private partnerships, we analyzed the genesis of such projects, motivations for involvement from the public and private sectors, and benefit-cost analyses that were conducted to support project funding decisions. We determined that the data used in this report were sufficiently reliable for the purpose of our review. We conducted our review from June 2005 to August 2006 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Surface Transportation Board’s letter dated September 15, 2006. 1. STB commented that we conducted a national study into the state of competition. We did not conduct such a study. Our study included a broad focus on changes in the freight railroad industry since the Staggers Rail Act, the actions STB has taken to address concerns about competition and captivity, and future freight demand and capacity. The data we collected and analysis we performed—such as a review of rate changes over 20 years—were too broad to represent a national study of the state of competition. It is the limitations in the scope of our analysis of competition, along with limitations in the data available to us and a reasonable possibility that shippers in selected markets may be paying excessive rates, which led us to recommend that STB conduct a more rigorous analysis of competition. 2. STB commented that it has already addressed our recommendation to improve data collection by proposing standardized monthly reports of fuel surcharges and also described its efforts to ensure the accuracy and reliability of data in the Waybill. 
We commend STB for its recent action on fuel surcharges, which occurred during our review, but we also note STB has not yet implemented standardized reporting of fuel surcharges. In addition, other revenues besides fuel surcharges may not be included in the Waybill. Specifically, revenues generated through railcar auctions and congestion fees may not be included. While the reported miscellaneous revenue is a small percentage of all revenue, it is not known how much miscellaneous revenue is not reported. Complete data would provide for more accurate tracking of railroad revenues and would help STB to obtain a clearer picture of actual fees paid by shippers. While we commend STB for its actions to audit and review Waybill data, these accuracy checks do not address our concern that STB is not collecting the full range of revenue data. 3. STB commented that our recommendation for STB to conduct an analysis of competition is based on two findings—that rail rates have increased since 1980 and that the amount of traffic with high R/VC ratios has increased in some areas. Our recommendation is not based on these two findings, but on an analysis of multiple sources, such as data on the amount of tonnage originating in economic areas that have access to only one Class I railroad, data on the amount of tonnage traveling over 300 percent R/VC, and the amount of tonnage that originates in areas with access to only one Class I railroad and travels at rates that exceed the statutory threshold for rate relief. This analysis provides an important first step in assessing competitive markets nationally; but it is imperfect, given the limitations of measures used to weigh captivity and limitations in the Carload Waybill Sample. 
The results of our analysis, when combined with comments from participants on our expert panel and interviews with shipper and railroad groups, suggest a reasonable possibility that shippers in selective markets may be paying excessive rates related to a lack of competition in these markets. It is precisely the inconclusiveness of the available data—and STB’s authority and responsibility to monitor and ensure effective competition in the freight rail industry—that led us to recommend a rigorous analysis of competition by STB. Also, we examined rates since 1985, not 1980. 4. STB commented that an increase in rates does not suggest market abuses and that the rate changes in our report were not adjusted for inflation. We agree that a change in a rate does not necessarily suggest the exercise of market power. While our rates were not adjusted for inflation, we constructed rate indexes, which account for changes in traffic patterns over time that could affect revenue statistics. We also included the price index for the GDP to provide a measure for inflation. However, our recommendation is not based on recent rate increases. Our recommendation is based on our analyses of multiple sources, such as data on the amount of tonnage originating in economic areas that have access to only one Class I railroad, data on the amount of tonnage traveling over 300 percent R/VC, and the amount of tonnage that originates in areas with access to only one Class I railroad and travels at rates that exceed the statutory threshold for rate relief. 5. STB commented that figure 19 shows an increase in grain traffic which traveled at rates above 300 percent R/VC and figure 9 shows that grain rates per ton-mile had fallen along that same route, so the change in R/VC must be due to a drop in costs per ton-mile. We disagree that the change in R/VC in figure 19 must be due to a drop in costs per ton-mile. 
Figure 19 shows only the amount of traffic on the route that traveled at rates above 300 percent R/VC, while figure 9 shows the cents per ton-mile for all traffic along that route (not just traffic that traveled at rates above 300 percent R/VC). Therefore, the decrease in cents per ton-mile shown in figure 9 may reflect a decrease in rates for traffic along that route that traveled at rates below 300 percent R/VC. 6. STB commented that the measures used in our analysis are not conclusive. The fact that our analysis is inherently limited by available data and proxy measures lends more weight to our recommendation. Specifically, our analysis provides an important first step in assessing competitive markets nationally, but it is imperfect given the limitations of measures used to weigh captivity and limitations in the Carload Waybill Sample. We do not conclusively state that there are shippers who are captive to one railroad and paying rates that reflect an abuse of market power. However, the results of our analysis, when combined with comments from participants on our expert panel and interviews with shipper and railroad groups, suggest a reasonable possibility that shippers in selective markets may be paying excessive rates related to a lack of competition in these markets. We believe that STB is the agency that has the authority and responsibility to conduct an inquiry into the potential abuse of market power and utilize its range of options to address competition issues. 7. STB commented that R/VC levels do not provide a reliable measure of changes in captivity because they can increase when rates are falling. We agree that an analysis of R/VC levels is not a conclusive measure of the use of market power. However, the use of R/VC as an indicator of railroad pricing power is well-documented both by Congress in the Staggers Rail Act and by STB, which uses R/VC levels in its process for determining unreasonable rates.
While we acknowledge the limitations of the ratio in our report, and even include an example like the one cited above, we believe that R/VC ratios can be used as one of several proxy measures to determine potential captivity. In fact, STB refers to traffic traveling at or above 180 percent R/VC as “potentially captive.” 8. STB commented that it has several important rule makings under way that bear directly on our concerns, including changes to the standard and simplified rate relief processes. While we commend STB for taking action to improve its rate relief processes, we note that these rule makings are designed to make changes to the standard and simplified rate relief processes and are not designed to analyze the state of competition or the possible abuse of market power. In contrast, we believe that an analysis of the state of competition or the possible abuse of market power, along with the range of options STB has to address competition issues, could more directly further legislatively defined goals to ensure effective competition among rail carriers as the preferred means to both promoting a sound rail transportation system and maintaining reasonable rates. 9. STB commented that it is hesitant to divert resources away from its pending initiatives to respond to our recommendation. We have modified our draft to recommend that, if STB determines that it needs more resources to undertake a rigorous analysis of competitive markets to identify the state of competition nationwide, it should request additional resources from Congress. 10. STB commented that, as a small agency, a more practical approach to addressing concerns about captive shippers would be for STB to continue reforming its rate complaint procedures, rather than conduct another analysis. While we commend STB for continuing its efforts to improve its standard and simplified rate relief processes, these rule makings will not address our concerns.
Specifically, these rule makings are designed to improve processes available to shippers after they have been charged a rate they consider to be unreasonable; these rule makings are not designed to analyze the state of competition or the possible abuse of market power. In contrast, we believe that an analysis of the state of competition or the possible abuse of market power, along with the range of options STB has to address competition issues, could more directly further legislatively defined goals to ensure effective competition among rail carriers as the preferred means to both promoting a sound rail transportation system and maintaining reasonable rates. We believe that STB is the agency that is uniquely positioned to inquire into and report on railroad practices and could conduct an analysis of competition that would rely on more than sample data and could determine whether the inappropriate exercise of market power is occurring in specific markets. STB has the authority to subpoena witnesses and records. Following its inquiry, STB could also consider initiating a generally applicable rule making to address competition issues or prescribe specific remedies in response to a complaint. We recognize that STB has limited resources, and we have modified our draft to recommend that, if STB determines that it needs more resources to conduct an analysis of competition, it should request additional resources from Congress. In addition to those named above, individuals making key contributions to this report include Ashley Alley, Steve Brown, Matthew T. Cail, Sheranda S. Campbell, Steve Cohen, Elizabeth Eisenstadt, Libby Halperin, Richard Jorgenson, Tom McCool, John Mingus, Josh Ormond, and John W. Shumann.

Freight Railroads: Preliminary Observations on Rates, Competition, and Capacity Issues. GAO-06-898T. Washington, D.C.: June 21, 2006.
Freight Transportation: Short Sea Shipping Option Shows Importance of Systematic Approach to Public Investment Decisions. GAO-05-768. Washington, D.C.: July 29, 2005.
Freight Transportation: Strategies Needed to Address Planning and Financing Limitations. GAO-04-165. Washington, D.C.: December 19, 2003.
Railroad Regulation: Changes in Freight Railroad Rates from 1997 through 2000. GAO-02-524. Washington, D.C.: June 7, 2002.
Freight Railroad Regulation: Surface Transportation Board’s Oversight Could Benefit from Evidence Better Identifying How Mergers Affect Rates. GAO-01-689. Washington, D.C.: July 5, 2001.
Railroad Regulation: Current Issues Associated with the Rate Relief Process. GAO/RCED-99-46. Washington, D.C.: April 29, 1999.
Railroad Regulation: Changes in Railroad Rates and Service Quality Since 1990. GAO/RCED-99-93. Washington, D.C.: April 6, 1999.
Interstate Commerce Commission: Key Issues Need to Be Addressed in Determining Future of ICC’s Regulatory Functions. GAO/T-RCED-94-261. Washington, D.C.: July 12, 1994.
Railroad Competitiveness: Federal Laws and Policies Affect Railroad Competitiveness. GAO/RCED-92-16. Washington, D.C.: November 5, 1991.
Railroad Regulation: Economic and Financial Impacts of the Staggers Rail Act of 1980. GAO/RCED-90-80. Washington, D.C.: May 16, 1990.
Railroad Regulation: Shipper Experiences and Current Issues in ICC Regulation of Rail Rates. GAO/RCED-87-119. Washington, D.C.: September 9, 1987.
Railroad Regulation: Competitive Access and Its Effects on Selected Railroads and Shippers. GAO/RCED-87-109. Washington, D.C.: June 18, 1987.
Railroad Revenues: Analysis of Alternative Methods to Measure Revenue Adequacy. GAO/RCED-87-15BR. Washington, D.C.: October 2, 1986.
Shipper Rail Rates: Interstate Commerce Commission’s Handling of Complaints. GAO/RCED-86-54FS. Washington, D.C.: January 30, 1986.

The Staggers Rail Act deregulated the freight rail industry, relying on competition to set rates, and allowed for differential pricing (charging higher rates to those more dependent on rail).
The act gave the Surface Transportation Board (STB) authority to develop remedies for shippers "captive" to one railroad and set a threshold for shippers to apply for rate relief. GAO was asked to review (1) changes in the railroad industry since the Staggers Rail Act, including rates and competition; (2) STB actions to address competition and captivity concerns and alternatives that could be considered; and (3) freight demand and capacity projections and potential federal policy responses. GAO examined STB data, conducted interviews, and held an expert panel. Changes in the railroad industry since the Staggers Rail Act are widely viewed as positive, as the industry's financial health has improved and most rates have declined; however, concerns over competition and captivity remain. Rail rates generally declined between 1985 and 2000, then increased slightly from 2001 through 2004. Concerns about competition and captivity remain as traffic is concentrated in fewer railroads. It is difficult to determine the number of "captive" shippers as proxy measures can overstate or understate captivity. Nevertheless, GAO's analysis of limited available measures indicates that the extent of captivity appears to be dropping, but the percentage of traffic traveling at rates substantially over the threshold for rate relief has increased. Also, some areas with access to only one major railroad have higher percentages of traffic traveling at rates above the threshold. These findings may reflect reasonable economic practices by the railroads or a possible abuse of market power. GAO's analysis is limited by available data and proxy measures but suggests that shippers in selected markets may be paying excessive rates, meriting further inquiry and analysis. While STB has taken action, further efforts to improve its rate relief processes and assess competition could help address competition and captivity concerns and inform the merits of proposed alternative approaches. 
STB's rate relief processes are largely inaccessible and rarely used. STB recognizes this and is taking steps to improve its processes. STB has broad statutory authority to inquire into and report on railroad industry practices and, given a reasonable possibility that some shippers may be paying excessive rates, an assessment of competition could determine whether there is sufficient evidence that market power is being abused in specific markets. While competition between railroads may not always be feasible, alternative approaches have costs and benefits that should be carefully considered to ensure the balance envisioned in the Staggers Rail Act--including the railroads' need for adequate revenues. Significant increases in freight traffic are forecast, and the industry's ability to meet them is largely uncertain. Investments in rail projects can produce public benefits, such as reducing highway congestion. As a result, federal and state governments have increasingly participated in freight rail projects. In 2005, for example, Congress provided $100 million for rail improvements in the Chicago area. Congress faces additional decisions about potential federal policy responses in years ahead. Responses should recognize that the freight transportation system includes many modes that are treated differently by the federal government and functions in a competitive marketplace and a constrained federal funding environment. In developing a National Freight Policy, the Department of Transportation (DOT) has made a good start by providing context for those decisions, and DOT can help sustain the role of the competitive marketplace through strategies that promote a level playing field for freight transportation decision making and acknowledge the constrained federal fiscal environment by focusing federal involvement where demonstrable, wide-ranging public benefits exist. |
The public faces the risk that critical services could be severely disrupted by the Year 2000 computing crisis. Financial transactions could be delayed, airline flights grounded, and national defense affected. The many interdependencies that exist among the levels of governments and within key economic sectors of our nation could cause a single failure to have wide-ranging repercussions. While managers in the government and the private sector are acting to mitigate these risks, a significant amount of work remains. The federal government is extremely vulnerable to the Year 2000 issue due to its widespread dependence on computer systems to process financial transactions, deliver vital public services, and carry out its operations. This challenge is made more difficult by the age and poor documentation of many of the government’s existing systems and its lackluster track record in modernizing systems to deliver expected improvements and meet promised deadlines. Year 2000-related problems have already occurred. For example, an automated Defense Logistics Agency system erroneously deactivated 90,000 inventoried items as the result of an incorrect date calculation. According to the agency, if the problem had not been corrected (which took 400 work hours), the impact would have seriously hampered its mission to deliver materiel in a timely manner. Our reviews of federal agency Year 2000 programs have found uneven progress, and our reports contain numerous recommendations, which the agencies have almost universally agreed to implement. Among them are the need to establish priorities, solidify data exchange agreements, and develop contingency plans. One of the largest, and largely unknown, risks relates to the global nature of the problem. With the advent of electronic communication and international commerce, the United States and the rest of the world have become critically dependent on computers. 
However, with this electronic dependence and massive exchanging of data comes increasing risk that uncorrected Year 2000 problems in other countries will adversely affect the United States. And there are indications of Year 2000 readiness problems internationally. In September 1997, the Gartner Group, a private research firm acknowledged for its expertise in Year 2000 computing issues, surveyed 2,400 companies in 17 countries and concluded that “hirty percent of all companies have not started dealing with the year 2000 problem.” As 2000 approaches, the scope of the risks that the century change could bring has become more clear, and the federal government’s actions have intensified. This past February, an executive order was issued establishing the President’s Council on Year 2000 Conversion. The Council Chair is to oversee federal agency Year 2000 efforts as well as be the spokesman in national and international forums, coordinate with state and local governments, promote appropriate federal roles with respect to private-sector activities, and report to the President on a quarterly basis. As we testified last month, there are a number of actions we believe the Council must take to avert this crisis. We plan to issue a report later this month detailing our specific recommendations. The following summarizes a few of the key areas in which we will be recommending action. Because departments and agencies have taken longer than recommended to assess the readiness of their systems, it is unlikely that they will be able to renovate and fully test all mission-critical systems by January 1, 2000. Consequently, setting priorities is essential, with the focus being on systems most critical to our health and safety, financial well-being, national security, or the economy. Agencies must start business continuity and contingency planning now to safeguard their ability to deliver a minimum acceptable level of services in the event of Year 2000-induced failures.
Last month, we issued an exposure draft of a guide providing information on business continuity and contingency planning issues common to most large enterprises. Agencies developing such plans only for systems currently behind schedule, however, are not addressing the need to ensure business continuity in the event of unforeseen failures. Further, such plans should not be limited to the risks posed by the Year 2000-induced failures of internal information systems, but must include the potential Year 2000 failures of others, including business partners and infrastructure service providers. The Office of Management and Budget’s (OMB) assessment of the current status of federal Year 2000 progress is predominantly based on agency reports that have not been consistently verified or independently reviewed. Without such independent reviews, OMB and the President’s Council on Year 2000 Conversion have little assurance that they are receiving accurate information. Accordingly, agencies must have independent verification strategies involving inspectors general or other independent organizations. As a nation, we do not know where we stand with regard to Year 2000 risks and readiness. No nationwide assessment—including the private and public sectors—has been undertaken to gauge this. In partnership with the private sector and state and local governments, the President’s Council could orchestrate such an assessment. Ensuring that information systems are made Year 2000 compliant is an enormous, difficult, and time-consuming challenge for a large organization such as the Department of the Interior. Interior’s systems support a wide range of programs; unless they can function into the next century, the department is at risk of being unable to effectively or efficiently carry out its critical missions.
As the nation’s principal conservation agency, Interior has responsibility for managing most of our nationally owned public lands and natural resources, protecting our fish and wildlife, and preserving the environmental and cultural values of our national parks and historic places. The department’s core business processes could fail—in whole or in part—if supporting information systems are not made Year 2000 compliant in time. These include systems that account for and disburse mineral royalties of about $300 million each; support the management of the nation’s lands and mineral resources; account for and maintain records on over $2.5 billion of American Indian trust fund assets; and detect and analyze ground motion and provide early warnings of earthquakes. A detailed example of this kind of risk can be seen in recent work we performed for the House Committee on Appropriations, Subcommittee on Interior and Related Agencies, where we concluded that recent and potential future delays in the Bureau of Land Management’s (BLM) Automated Land and Mineral Record System (ALMRS) introduce the risk that BLM will lose information systems support for some core business processes. Two systems that ALMRS is scheduled to replace, the Case Recordation System and the Mining Claim Recordation System, are currently not Year 2000 compliant. BLM uses these two systems to create and manage land and mineral case files. They capture and provide information on case type, customer, authorizations, and legal descriptions. Without these systems, BLM cannot create and record new cases, such as mining claims, or update case information. Delays in implementing ALMRS introduce the risk that BLM will be forced to continue using these two systems beyond 2000. To mitigate this risk, BLM has begun planning to ensure that these two systems can run in 2000 and beyond, if necessary.
BLM has not yet, however, completed its assessment to determine what specific actions are needed to accomplish this, nor has it developed a contingency plan to ensure the continuity of core business processes in the event that ALMRS is not fully deployed by 2000. In a draft report to be released soon, we are recommending that BLM assess the systems to be replaced by ALMRS to determine what actions are needed to ensure their continued use after January 1, 2000, and develop a contingency plan should ALMRS not be fully and successfully deployed in time. Interior officials have stated that they recognize the importance of ensuring that their systems are Year 2000 compliant. The Secretary has said that identifying and correcting Year 2000 computer problems is a priority, and the former Chief Information Officer called this challenge one of the most serious operational and administrative problems the department has ever faced. In assessing the magnitude of the problem, the department’s bureaus and offices identified 95 mission-critical systems, with a total of about 18 million lines of software code, all of which must be examined. Interior estimates that correcting these 95 systems will cost $17.3 million, as shown in the following table. In addition to these systems, the department is also assessing its communications systems and embedded computer chip technologies to determine whether they will be affected by the coming century change. Embedded systems are special-purpose computers built into other devices. Many facilities used by the federal government that were built or renovated within the last 20 years contain embedded computer systems to control, monitor, or assist in operations. If the embedded chips used in such devices contain two-digit date fields for year representation, the devices could malfunction.
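The arithmetic failure behind such malfunctions is simple to illustrate. The following sketch is hypothetical (it is not drawn from any Interior or federal system): an elapsed-time calculation based on two-digit year fields goes negative once the century rolls over, while "windowing," a remediation technique widely used in Year 2000 conversions, restores the correct result.

```python
def years_elapsed_two_digit(start_yy, end_yy):
    # A two-digit date field stores only the last two digits of the
    # year, so 1998 is held as 98 and 2000 as 00.
    return end_yy - start_yy

# The interval from 1998 to 2000 as a faulty device would compute it:
broken = years_elapsed_two_digit(98, 0)   # -98 rather than 2

def years_elapsed_windowed(start_yy, end_yy, pivot=50):
    # Windowing: interpret two-digit years below the pivot as 20xx and
    # the rest as 19xx. The pivot value here is an illustrative choice.
    expand = lambda yy: (2000 if yy < pivot else 1900) + yy
    return expand(end_yy) - expand(start_yy)

fixed = years_elapsed_windowed(98, 0)     # 2
```

A device that uses a negative interval like this to schedule maintenance or trigger alarms could conclude that decades of time have run backward, which is why embedded systems required assessment alongside conventional software.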
For example, control systems that regulate water flow and generators in our nation’s dams, which produce over 42 billion kilowatt-hours of energy each year, could fail. Interior’s Year 2000 program operates in a decentralized fashion as its bureaus and offices are responsible for identifying and assessing their mission-critical systems, determining correction priorities, and making their own mission-critical systems Year 2000 compliant. Departmental oversight is provided by Interior’s Year 2000 Project Office. This office reports directly to the Chief Information Officer. The Year 2000 Project Team consists of a Year 2000 coordinator from the department and a representative located in each bureau or office. The bureaus and offices maintain information used to manage their Year 2000 activities. Bureau and office representatives submit monthly milestone and status information to the coordinator, which he analyzes and compiles manually. The coordinator tracks major milestones, such as systems assessments completed, Year 2000 renovations completed, and systems implemented. The information is forwarded to the Chief Information Officer and, each quarter, to OMB. According to Interior’s Year 2000 coordinator, he tracks the 95 mission-critical systems and maintains status information in a word processing table that lacks the capability for automated tracking or analysis. He stated that he notifies the Chief Information Officer of any reported milestone delays, which are then discussed at senior-level management meetings. Table 2 shows the status of the 67 mission-critical systems that are being renovated, as reported to OMB on February 15, 1998. (This table does not include the other 28 mission-critical systems, which are considered already compliant or are being replaced.) Accurate reporting is critical to ensuring that executive management receives a reliable picture of the Year 2000 progress of component organizations.
This is particularly important at Interior, where much of the Year 2000 program responsibility is delegated to the individual bureaus and offices. Although the department relies on its bureaus to provide monthly reports on the status of their Year 2000 renovation actions, to date it has not verified the accuracy and reliability of the reported information. As the only staff member in Interior’s Year 2000 Project Office, the department’s coordinator does not have the ability to verify the accuracy of reported information on the bureaus’ and offices’ mission-critical systems. Therefore, the Chief Information Officer requested that Interior’s Inspector General assist in monitoring the progress of the individual bureaus in achieving Year 2000 compliance. Such verification is important because inaccurate data would make it more difficult to identify and correct problems promptly. Interior regularly exchanges data with other organizations. In many instances, these data are critical to the department’s operations. In response to a recent survey we conducted, Interior reported that 40 of its 95 mission-critical systems exchange electronic data with other federal, state, and local agencies; domestic and foreign private sectors; and foreign governments. Although the bureaus have identified over 2,900 incoming and outgoing external data exchanges, the department does not have a central inventory. While it has asked each bureau and office head to certify that date-sensitive data exchanges have been identified and data exchange partners contacted to begin resolving date-format issues, the lack of a centralized inventory and an automated way to maintain it means that Interior could be missing key information showing whether exchange agreements are proceeding as scheduled. Failure to reach such agreements raises the risk that Interior’s systems will receive noncompliant data that can corrupt its databases.
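A short hypothetical sketch shows how noncompliant exchanged data can quietly corrupt downstream processing. The record format and the windowing pivot below are illustrative assumptions, not Interior's actual exchange formats: a receiver that expands every two-digit year as 19xx files a year-2000 record decades out of order.

```python
# Hypothetical exchange records carrying a two-digit-year YYMMDD date
# field, as a noncompliant partner might transmit them.
records = ["991230", "000102", "990615"]

# A receiver that expands every two-digit year as 19xx sorts the
# year-2000 record first, decades out of order:
naive = sorted(records, key=lambda d: (1900 + int(d[:2]), d[2:]))
# naive == ["000102", "990615", "991230"]

# Windowing the year at a pivot agreed with the exchange partner
# (an illustrative value) keeps the records in true chronological order:
def expand(d, pivot=50):
    yy = int(d[:2])
    return ((2000 if yy < pivot else 1900) + yy, int(d[2:4]), int(d[4:6]))

correct = sorted(records, key=expand)
# correct == ["990615", "991230", "000102"]
```

This is why the exchange agreements discussed above matter: both partners must agree on how two-digit years are interpreted, or one side's compliant fix can still misread the other side's dates.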
The risk of failure is not limited to an organization’s internal information systems, but includes the potential Year 2000 failures of others, such as business partners. One weak link in the chain of critical dependencies can cause even the most successful Year 2000 program to fail to protect against major disruption of business operations. Because of these risks, agencies must start business continuity and contingency planning now in order to reduce the risk of Year 2000-induced business failures. Interior has recognized, to some degree, the critical need for contingency planning, and has asked its bureaus and offices to develop such plans for all mission-critical systems that are behind schedule. However, it has not instructed its component organizations to develop plans to ensure the continuity of core business operations. As noted, agencies developing such plans only for systems currently behind schedule are not addressing the need to ensure business continuity in the event of unforeseen failures. Further, such plans should not be limited to the risks posed by Year 2000-induced failures of internal information systems. In conclusion, the change of century will initially present many difficult challenges in information technology and continuity of business operations, and has the potential to cause serious disruption to the nation and to the Department of the Interior. These risks can be mitigated and disruptions minimized with proper attention and management. While Interior has been working to mitigate its Year 2000 risks, further action must be taken to avoid losing the ability to continue mission-critical business operations. Continued congressional oversight through hearings such as this can help ensure that such attention continues and that appropriate actions are taken to address this crisis. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions that you or other members of the Committee may have at this time.
Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, Exposure Draft, March 1998). Year 2000 Computing Crisis: Strong Leadership Needed to Avoid Disruption of Essential Services (GAO/T-AIMD-98-117, March 24, 1998). Year 2000 Computing Crisis: Office of Thrift Supervision’s Efforts to Ensure Thrift Systems Are Year 2000 Compliant (GAO/T-AIMD-98-102, March 18, 1998). Year 2000 Computing Crisis: Strong Leadership and Effective Public/Private Cooperation Needed to Avoid Major Disruptions (GAO/T-AIMD-98-101, March 18, 1998). Post-Hearing Questions on the Federal Deposit Insurance Corporation’s Year 2000 (Y2K) Preparedness (AIMD-98-108R, March 18, 1998). SEC Year 2000 Report: Future Reports Could Provide More Detailed Information (GAO/GGD/AIMD-98-51, March 6, 1998). Year 2000 Readiness: NRC’s Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998). Year 2000 Computing Crisis: Federal Deposit Insurance Corporation’s Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998). Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998). FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998). Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998). Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems’ Year 2000 Problem (GAO/AIMD-98-48, January 7, 1998). Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997). Year 2000 Computing Crisis: National Credit Union Administration’s Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997). 
Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/AIMD-98-6, October 22, 1997). Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997). Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997). Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997). Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997). Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997). Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Year 2000 Computing Crisis: Time is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997). Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997). Veterans Benefits Computers Systems: Risks of VBA’s Year-2000 Efforts (GAO/AIMD-97-79, May 30, 1997). Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997). Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997). Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997). 
Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997). High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. | GAO discussed where the federal government stands in its efforts to lessen Year 2000 risks and GAO's preliminary observations on Year 2000 activities at the Department of the Interior. 
GAO noted that: (1) the federal government is extremely vulnerable to the Year 2000 issue due to its widespread dependence on computer systems; (2) its reviews of federal agency Year 2000 programs have found uneven progress, and its reports contain numerous recommendations, which the agencies have almost universally agreed to implement; (3) one of the largest, and largely unknown, risks relates to the global nature of the Year 2000 problem; (4) with electronic dependence and massive exchange of data comes increasing risk that uncorrected Year 2000 problems in other countries will adversely affect the United States; (5) setting priorities for Year 2000 conversion is essential, with the focus being on systems most critical to health and safety, financial well being, national security, or the economy; (6) agencies must start business continuity and contingency planning now to safeguard their ability to deliver a minimum acceptable level of services in the event of Year 2000-induced failures; (7) agencies must have strategies for independently verifying the status of their Year 2000 efforts; (8) no nationwide assessment, including the private and public sectors, has been undertaken of Year 2000 risks and readiness; (9) Interior estimates that correcting its 95 mission-critical systems will cost $17.3 million; (10) Interior is also assessing its communications systems and embedded chip technologies to determine whether they will be affected by the century change; and (11) Interior's Year 2000 coordinator does not have the ability to verify the accuracy of reported information on the bureaus' and offices' mission-critical systems. |
Federal law and policy have established roles and responsibilities for federal agencies to coordinate with industry in enhancing the security and resilience of critical government and industry infrastructures. According to the Homeland Security Act of 2002, as amended, DHS is to, among other things, carry out comprehensive vulnerability assessments of CI; integrate relevant information, analyses, and assessments from within DHS and from CI partners; and use the information collected to identify priorities for protective and support measures. Assessments include areas that can be assessed for vulnerability (hereinafter referred to as “areas”), such as perimeter security, the presence of a security force, or vulnerabilities to intentional acts, including acts of terrorism. Presidential Policy Directive/PPD-21 directs DHS to, among other things, provide strategic guidance, promote a national unity of effort, and coordinate the overall federal effort to promote the security and resilience of the nation’s CI. Related to PPD-21, the NIPP calls for the CI community and associated stakeholders to carry out an integrated approach to (1) identify, deter, detect, disrupt, and prepare for threats and hazards (all hazards); (2) reduce vulnerabilities of critical assets, systems, and networks; and (3) mitigate the potential consequences to CI from incidents or events that do occur. According to the NIPP, CI partners are to identify risk in a coordinated and comprehensive manner across the CI community; minimize duplication; consider interdependencies; and, as appropriate, share information within the CI community. Within DHS, NPPD is responsible for working with public and industry infrastructure partners and leads the coordinated national effort to mitigate risk to the nation’s infrastructure through the development and implementation of the infrastructure security program.
NPPD’s Office of Infrastructure Protection (IP) has overall responsibility for coordinating implementation of the NIPP across the 16 CI sectors, including providing guidance to SSAs and CI owners and operators on protective measures to assist in enhancing the security of infrastructure and helping CI sector partners develop the capabilities to mitigate vulnerabilities and identifiable risks to the assets. The NIPP also designates other federal agencies, as well as some offices and components within DHS, as SSAs that are responsible for, among other things, coordinating with DHS and other federal departments and agencies and CI owners and operators to identify vulnerabilities, and to help mitigate incidents, as appropriate. DHS offices and components or asset owners and operators have used various assessment tools and methods, some of which are voluntary, while others are required by law or regulation, to gather information about certain aspects of CI. For example, the Protective Security Coordination Division (PSCD), within NPPD, relies on Protective Security Advisors (PSA) to offer and conduct voluntary vulnerability assessments to owners and operators of CI to help identify potential security actions; the Infrastructure Security Compliance Division, within NPPD, requires regulated chemical facilities to complete a security vulnerability assessment pursuant to CFATS; TSA conducts various assessments of airports, pipelines, and rail and transit systems; and the Coast Guard requires facilities it regulates under the Maritime Transportation Security Act of 2002 (MTSA) to complete assessments as part of their security planning process. In addition, SSAs external to DHS also offer vulnerability assessment tools and methods to owners or operators of CI, and these assessments include areas such as resilience management or perimeter security.
For example, the Environmental Protection Agency, the SSA for the water sector, provides a self-assessment tool for the conduct of voluntary security-related assessments at water and wastewater facilities. DHS took steps to address barriers to conducting critical infrastructure vulnerability assessments and sharing information, in response to findings from our previous work. Specifically, DHS has made progress in the following areas: Determining why some industry partners do not participate in voluntary assessments. DHS supports the development of the national risk picture by conducting vulnerability assessments and security surveys to identify security gaps and potential vulnerabilities in the nation’s high-priority critical infrastructure. In a May 2012 report, we assessed the extent to which DHS had taken action to conduct security surveys using its Infrastructure Survey Tool (IST) and vulnerability assessments among high-priority infrastructure, shared the results of these surveys and assessments with asset owners or operators, and assessed their effectiveness. We found that various factors influence whether industry owners and operators of assets participate in these voluntary programs, but that DHS did not systematically collect data on reasons why some owners and operators of high-priority assets declined to participate in security surveys or vulnerability assessments. We concluded that collecting data on the reason for declinations could help DHS take steps to enhance the overall protection and resilience of those high-priority critical infrastructure assets crucial to national security, public health and safety, and the economy. We recommended, and DHS concurred, that DHS design and implement a mechanism for systematically assessing why owners and operators of high-priority assets decline to participate. In response to our recommendations, in October 2013 DHS developed and implemented a tracking system to capture and account for declinations.
In addition, in August 2014 DHS established a policy to conduct quarterly reviews to, among other things, track these and other survey and assessment programs, identify gaps and requirements for priorities, and help DHS better understand what barriers owners and operators of critical infrastructure face in making improvements to the security of their assets. Sharing of assessment results at the asset level in a timely manner. DHS security surveys and vulnerability assessments can provide valuable insights into the strengths and weaknesses of assets and can help asset owners and operators that participate in these programs make decisions about investments to enhance security and resilience. In our May 2012 report, we found that, among other things, DHS shared the results of security surveys and vulnerability assessments with asset owners or operators. However, we also found that the usefulness of security survey and vulnerability assessment results could be enhanced by the timely delivery of these products to the owners and operators. We reported that the inability to deliver these products in a timely manner could undermine the relationship DHS was attempting to develop with these industry partners. Specifically, we reported that, based on DHS data from fiscal year 2011, DHS was late meeting the 30-day time frame for delivering the results of its security surveys required by DHS guidance 60 percent of the time. DHS officials acknowledged the late delivery of survey and assessment results and said they were working to improve processes and protocols. However, DHS had not established a plan with time frames and milestones for managing this effort. We recommended, and DHS concurred, that it develop time frames and specific milestones for managing its efforts to ensure the timely delivery of the results of security surveys and vulnerability assessments to asset owners and operators.
In response to our recommendation, DHS established time frames and milestones to ensure the timely delivery of survey and assessment results to CI owners and operators. In addition, in February 2013, DHS transitioned to a web-based delivery system, which, according to DHS, has since resulted in a significant drop in overdue deliveries. Sharing certain information with critical infrastructure partners at the regional level. Our work has shown that over the past several years, DHS has recognized the importance of and taken actions to examine critical infrastructure asset vulnerabilities, threats, and potential consequences across regions. In a July 2013 report, we examined DHS’s management of its Regional Resiliency Assessment Program (RRAP)—a voluntary program intended to assess regional resilience of critical infrastructure by analyzing a region’s ability to adapt to changing conditions, and prepare for, withstand, and rapidly recover from disruptions—and found that DHS has been working with states to improve the process for conducting RRAP projects, including more clearly defining the scope of these projects. We also reported that DHS shares the results of each RRAP project report, including vulnerabilities identified, with the primary stakeholders—officials representing the state where the RRAP was conducted—and that each report is generally available to SSAs and protective security advisors within DHS. Sharing information with sector-specific agencies and state and local governments. Federal SSAs and state and local governments are key partners that can provide specific expertise and perspectives in federal efforts to identify and protect critical infrastructure. In a March 2013 report, we reviewed DHS’s management of the National Critical Infrastructure Prioritization Program (NCIPP) and how DHS worked with states and SSAs to develop the high-priority CI list. 
The program identifies a list of nationally significant critical infrastructure each year that is used to, among other things, prioritize voluntary vulnerability assessments conducted by PSAs on high-priority critical infrastructure. We reported that DHS had taken actions to improve its outreach to SSAs and states in an effort to address challenges associated with providing input on nominations and changes to the NCIPP list. However, we also found that most state officials we contacted continued to experience challenges with nominating assets to the NCIPP list using the consequence-based criteria developed by DHS. Among other actions, we recommended that DHS commission an independent, external peer review of the NCIPP with clear project objectives. In November 2013, DHS commissioned a panel that reviewed the NCIPP process, guidance documentation, and process phases to provide an evaluation of the extent to which the process is comprehensive, reproducible, and defensible. The panel made 24 observations about the NCIPP; however, panel members expressed different views regarding the classification of the NCIPP list, and views on whether private sector owners of the assets, systems, and clusters should be notified of inclusion on the list. As of August 2014, DHS officials reported that they are exploring options to streamline the process and limit the delay of dissemination among those who have a need-to-know. Our previous work identified a need for DHS vulnerability assessment guidance and coordination. Specifically, we found: Establishing guidance for areas of vulnerability covered by assessments. In a September 2014 report examining, among other things, the extent to which DHS is positioned to integrate vulnerability assessments to identify priorities, we found that the vulnerability assessment tools and methods DHS offices and components use vary with respect to the areas assessed depending on which DHS office or component conducts or requires the assessment. 
As a result, it was not clear what areas DHS believes should be included in a comprehensive vulnerability assessment. Moreover, we found that DHS had not issued guidance to ensure that the areas it deems most important are captured in assessments conducted or required by its offices and components. Our analysis of 10 vulnerability assessment tools and methods showed that DHS vulnerability assessments consistently covered some areas of vulnerability but covered others inconsistently. Our analysis showed that all 10 of the DHS assessment tools and methods we analyzed included areas such as “vulnerabilities from intentional acts”—such as terrorism—and “perimeter security” in the assessment. However, 8 of the 10 assessment tools and methods did not include areas such as “vulnerabilities to all hazards,” such as hurricanes or earthquakes, while the other 2 did. These differences in areas assessed among the various assessment tools and methods could complicate or hinder DHS’s ability to integrate relevant assessments in order to identify priorities for protective and support measures. We found that the assessments conducted or required by DHS offices and components also varied greatly in their length and the detail of information to be collected. For example, within NPPD, PSCD used its IST to assess high-priority facilities that voluntarily participate, and this tool was used across the spectrum of CI sectors. The IST, which contains more than 100 questions and 1,500 variables, is used to gather information on the security posture of CI, and the results of the IST can inform owners and operators of potential vulnerabilities facing their asset or system. In another example from NPPD, ISCD required owners and operators of facilities that possess, store, or manufacture certain chemicals under CFATS to provide data on their facilities using an online tool so that ISCD can assess the risk posed by covered facilities. 
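The kind of cross-tool comparison described above (tallying which areas of vulnerability each assessment tool covers, then flagging areas assessed by some tools but not others) can be sketched as follows. The tool names and area labels below are illustrative placeholders, not DHS's actual instruments or findings.

```python
# Illustrative sketch of a coverage comparison across assessment tools.
# Tool names and areas are hypothetical stand-ins for the 10 tools analyzed.
TOOLS = {
    "Tool A": {"intentional acts", "perimeter security", "all hazards"},
    "Tool B": {"intentional acts", "perimeter security"},
    "Tool C": {"intentional acts", "perimeter security", "all hazards"},
}

def coverage_matrix(tools):
    """Return {area: sorted list of tools that assess it}."""
    areas = set().union(*tools.values())
    return {a: sorted(t for t, covered in tools.items() if a in covered)
            for a in sorted(areas)}

def inconsistent_areas(tools):
    """Areas assessed by some tools but not all -- candidates for guidance."""
    matrix = coverage_matrix(tools)
    return [a for a, covering in matrix.items()
            if 0 < len(covering) < len(tools)]

print(inconsistent_areas(TOOLS))  # ['all hazards']
```

A real analysis would load the actual question sets for each tool, but the same matrix structure makes gaps and inconsistencies visible at a glance.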
This tool, ISCD’s Chemical Security Assessment Tool Security Vulnerability Assessment, contained more than 100 questions based on how owners respond to an initial set of questions. Within DHS, TSA’s Office of Security Operations offered or conducted a number of assessments, such as a 205-question assessment of transit systems called the Baseline Assessment for Security Enhancements that contained areas to be assessed for vulnerability, and TSA’s 17-question Freight Rail Risk Analysis Tool was used to assess rail bridges. In addition to differences in what areas were included, there were also differences in the detail of information collected for individual areas, making it difficult to determine the extent to which the information collected was comparable and what assumptions and/or judgments were used while gathering assessment data. We also observed that components used different questions for the same areas assessed. For example, we found that while some components asked open-ended questions such as “describe security personnel,” others included drop-down menus or lists of responses to be selected. These variations, among others we identified, could impede DHS’s ability to integrate relevant information and use it to identify priorities for protective and support measures regarding terrorist and other threats to homeland security. We recommended that DHS review its vulnerability assessments to identify the most important areas to be assessed, determine the areas and level of detail that are necessary to integrate assessments and enable comparisons, and establish guidance, among other things. DHS agreed with our recommendation, and established a working group in August 2015 to address this recommendation and others we made. As of March 2016 these efforts are ongoing and DHS intends to provide an update in the summer of 2016. Establishing guidance on common data standards to help reduce assessment fatigue and improve information sharing. 
As we reported in September 2014, federal assessment fatigue could impede DHS’s ability to garner the participation of CI owners and operators in its voluntary assessment activities. During our review of vulnerability assessments, the Coast Guard, PSCD, and TSA field personnel we contacted reported observing what they called federal fatigue, or a perceived weariness among CI owners and operators who had been repeatedly approached or required by multiple federal agencies and DHS offices and components to participate in or complete assessments. One official who handles security issues for an association representing owners and operators of CI expressed concerns at the time about his members’ level of fatigue. Specifically, he shared observations that DHS offices and components do not appear to effectively coordinate with one another on assessment-related activities to share or use information and data that have already been gathered by one of them. The official also noted that, from the association’s perspective, the requests and invitations to participate in assessments have exceeded what is necessary to develop relevant and useful information, and information is being collected in a way that is not the best use of the owners’ and operators’ time. As figure 1 illustrates, depending on a given asset or facility’s operations, infrastructure, and location, an owner or operator could be asked or required to participate in multiple separate vulnerability assessments. DHS officials expressed concern at the time that this “fatigue” may diminish future cooperation from asset owners and operators. We recommended in September 2014 that DHS develop an approach for consistently collecting and maintaining data from assessments conducted across DHS to facilitate the identification of potential duplication and gaps in coverage. 
Having common data standards would better position DHS offices and components to minimize the aforementioned fatigue, and the resulting declines in CI owner and operator participation, by making it easier for DHS offices and components to use each other’s data to determine what CI assets or facilities may have been already visited or assessed by another office or component. They could then plan their assessment efforts and outreach accordingly to minimize the potential for making multiple visits to the same assets or facilities. DHS agreed with our recommendation, and as of March 2016 DHS had established a working group to address the recommendations from our report and planned to provide us with a status update in the summer of 2016. Addressing the potential for duplication, overlap, or gaps between and among the various efforts. As with the sharing of common assessment data, we found in our 2014 review of vulnerability assessments that DHS also lacks a department-wide process to facilitate coordination among the various offices and components that conduct vulnerability assessments or require assessments on the part of owners and operators. This could hinder the ability to identify gaps or potential duplication in DHS assessments. For example, among 10 different types of DHS vulnerability assessments we compared, we found that DHS assessment activities were overlapping across some of the sectors, but not others. Given the overlap of DHS’s assessments among many of the 16 sectors, we attempted to compare data to determine whether DHS had conducted or required vulnerability assessments at the same critical infrastructure within those sectors. However, we were unable to conduct this comparison because of differences in the way data about these activities were captured and maintained. 
Officials representing DHS acknowledged at the time they encountered challenges with the consistency of assessment data and stated that DHS-wide interoperability standards did not exist for them to follow in recording their assessment activities that would facilitate consistency and enable comparisons among the different data sets. The NIPP calls for standardized processes to promote integration and coordination of information sharing through, among other things, jointly developed standard operating procedures. However, DHS officials stated at the time that they generally relied on field-based personnel to inform their counterparts at other offices and components about planned assessment activities and share information as needed on what assets may have already been assessed. For example, PSAs may inform and invite CI partners to participate in these assessments, if the owner and operator of the asset agrees. PSAs may also alert their DHS counterparts depending on assets covered and their areas of responsibility. However, we found that absent these field-based coordination or sharing activities, it was unclear whether all facilities in a particular geographic area or sector were covered. For example, after CFATS took effect, in 2007, ISCD officials asked PSCD to stop having PSAs conduct voluntary assessments at CFATS-regulated chemical facilities to reduce potential confusion about DHS authority over chemical facility security and to avoid overlapping assessments. In response, PSCD reduced the number of voluntary vulnerability assessments conducted in the chemical sector. 
However, one former ISCD official noted that without direct and continuous coordination between PSCD and ISCD on what facilities are being assessed or regulated by each division, this could create a gap in assessment coverage between CFATS-regulated facilities and facilities that could have participated in PSCD assessments given that the number of CFATS-regulated facilities can fluctuate over time. Without processes for DHS offices and components to share data and coordinate with each other in their CI vulnerability assessment activities, DHS cannot provide reasonable assurance that it can identify potential duplication, overlap, or gaps in coverage that could ultimately affect DHS’s ability to work with its partners to enhance national CI security and resilience, consistent with the NIPP. We recommended in September 2014 that DHS develop an approach to ensure that vulnerability data gathered on CI be consistently collected and maintained across DHS to facilitate the identification of potential duplication and gaps in CI coverage. As of March 2016, DHS has begun a process of identifying the appropriate level of guidance to eliminate gaps or duplication in methods and to coordinate vulnerability assessments throughout the department. We also recommended that DHS identify key CI security-related assessment tools and methods used or offered by SSAs and other federal agencies, analyze them to determine the areas of vulnerability they capture, and develop and provide guidance for what areas should be included in vulnerability assessments of CI that can be used by DHS and other CI partners in an integrated and coordinated manner. DHS concurred with our recommendations and stated that it planned to take a variety of actions to address the issues we identified, including conducting an inventory survey of the security-related assessment tools and methods used by SSAs to address CI vulnerabilities. 
As of March 2016, DHS has established a working group, consisting of members from multiple departments and agencies, to enhance the integration and coordination of vulnerability assessment efforts. These efforts are ongoing and we will continue to monitor DHS’s progress in implementing these recommendations. In addition to efforts to address our recommendations, DHS is in the process of reorganizing NPPD to ensure that it is appropriately positioned to carry out its critical mission of cyber and infrastructure security. Key priorities of this effort are to include greater unity of effort across the organization and enhanced operational activity to leverage the expertise, skills, information, and relationships throughout DHS. The NPPD reorganization presents DHS with an opportunity to engage stakeholders in decision-making and may achieve greater efficiency or effectiveness by reducing programmatic duplication, overlap, and fragmentation. It also presents DHS with an opportunity to mitigate potential duplication or gaps by consistently capturing and maintaining data from overlapping vulnerability assessments of CI and improving data sharing and coordination among the offices and components involved with these assessments. Chairman Ratcliffe, Ranking Member Richmond, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this work include Ben Atwater, Assistant Director; Andrew Curry, Analyst-in-Charge; and Peter Haderlein. This appendix provides information on the 16 critical infrastructure (CI) sectors and the federal agencies responsible for sector security. 
The National Infrastructure Protection Plan (NIPP) outlines the roles and responsibilities of the Department of Homeland Security (DHS) and its partners—including other federal agencies. Within the NIPP framework, DHS is responsible for leading and coordinating the overall national effort to enhance security across the 16 critical infrastructure sectors. Consistent with the NIPP, Presidential Policy Directive 21 (PPD-21) assigned responsibility for the critical infrastructure sectors to sector-specific agencies (SSAs). As an SSA, DHS has direct responsibility for leading, integrating, and coordinating efforts of sector partners to protect 10 of the 16 critical infrastructure sectors. Seven other federal agencies have sole or coordinated responsibility for the remaining 6 sectors. Table 1 lists the SSAs and their sectors.

Protecting the security of CI is a top priority for the nation. CI includes assets and systems, whether physical or cyber, that are so vital to the United States that their destruction would have a debilitating impact on, among other things, national security or the economy. Multiple federal entities, including DHS, are involved in assessing CI vulnerabilities, and assessment fatigue could impede DHS's ability to garner the participation of CI owners and operators in its voluntary assessment activities. This testimony summarizes past GAO findings on progress made and improvements needed in DHS's vulnerability assessments, such as addressing potential duplication and gaps in these efforts. This statement is based on products GAO issued from May 2012 through October 2015 and recommendation follow-up conducted through March 2016. GAO reviewed applicable laws, regulations, directives, and policies from selected programs. GAO interviewed officials responsible for administering these programs and assessed related data. GAO interviewed and surveyed a range of stakeholders, including federal officials and CI owners and operators. 
GAO's prior work has shown that the Department of Homeland Security (DHS) has made progress in addressing barriers to conducting voluntary assessments, but guidance is needed for DHS's critical infrastructure (CI) vulnerability assessment activities and to address potential duplication and gaps. For example: Determining why some industry partners do not participate in voluntary assessments. In May 2012, GAO reported that various factors influence whether CI owners and operators participate in voluntary assessments that DHS uses to identify security gaps and potential vulnerabilities, but that DHS did not systematically collect data on reasons why some owners and operators of high-priority CI declined to participate. GAO concluded that collecting data on the reasons for declinations could help DHS take steps to enhance the overall security and resilience of high-priority CI crucial to national security, public health and safety, and the economy, and made a recommendation to that effect. DHS concurred and has taken steps to address the recommendation, including developing a tracking system in October 2013 to capture declinations. Establishing guidance for areas of vulnerability covered by assessments. In September 2014, GAO reported that the vulnerability assessment tools and methods DHS offices and components use vary with respect to the areas of vulnerability—such as perimeter security—assessed depending on which DHS office or component conducts or requires the assessment. As a result, it was not clear what areas DHS believes should be included in its assessments. GAO recommended that DHS review its vulnerability assessments to identify the most important areas of vulnerability to be assessed, and establish guidance, among other things. DHS agreed and established a working group in August 2015 to address this recommendation. As of March 2016 these efforts were ongoing with a status update expected in the summer of 2016. 
Addressing the potential for duplication, overlap, or gaps between and among the various efforts. In September 2014, GAO found overlapping assessment activities and reported that DHS lacks a department-wide process to facilitate coordination among the various offices and components that conduct vulnerability assessments or require assessments on the part of owners and operators. This could hinder the ability to identify gaps or potential duplication in DHS assessments. GAO identified opportunities for DHS to coordinate with other federal partners to share information regarding assessments. In response to GAO recommendations, DHS began a process of identifying the appropriate level of guidance to eliminate gaps or duplication in methods and to coordinate vulnerability assessments throughout the department. GAO also recommended that DHS identify key CI security-related assessment tools and methods used or offered by other federal agencies, analyze them to determine the areas they capture, and develop and provide guidance for what areas should be included in vulnerability assessments of CI that can be used by DHS and other CI partners in an integrated and coordinated manner. DHS agreed and, as of March 2016, had established a working group to address GAO's recommendations. GAO made recommendations to DHS in prior reports to strengthen its assessment efforts. DHS agreed with these recommendations and reported actions or plans to address them. GAO will continue to monitor DHS efforts to address these recommendations.
We found weak or nonexistent controls in the process that FEMA used to review disaster registrations and approve assistance payments, leaving the federal government vulnerable to fraud and abuse. In the critical aftermath of hurricanes Katrina and Rita, FEMA moved swiftly to distribute expedited assistance payments to allow disaster victims to mitigate and overcome the effects of the disasters. In this context, the establishment of an effective control environment was a significant challenge. Specifically, we found that FEMA had implemented some controls prior to the disaster to provide automated validation of the identity of registrants who applied for assistance via the Internet. Our work thus far indicates that this resulted in FEMA rejecting some registrants who provided names and SSNs that did not pass the validation test. However, FEMA did not implement the same preventive controls for those who applied via the telephone. Our use of fictitious names, bogus addresses, and fabricated disaster stories to obtain expedited assistance payments from FEMA demonstrated the ease with which expedited assistance could be obtained by providing false information over the telephone. Because expedited assistance is a gateway to further IHP payments (up to $26,200 per registration), approval for expedited assistance payments potentially exposes FEMA, and the federal government, to more fraud and abuse related to temporary housing, home repair and replacement, and other needs assistance. During the course of our audit and investigation, FEMA officials stated that they did not verify whether registrants had insurance and whether registrants were unable to live in their home prior to approving expedited assistance payments. According to FEMA officials, the unprecedented scale of the two disasters and the need to move quickly to mitigate their impact led FEMA to implement expedited assistance. 
Expedited assistance differs from the traditional way of delivering disaster assistance in that it calls for FEMA to provide assistance without requiring proof of losses and verifying the extent of such losses. Consequently, FEMA implemented limited controls to verify eligibility for the initial expedited assistance payments. According to FEMA officials, these controls were restricted to determining whether the damaged residence was in the disaster area and limited validation of the identity of registrants who used the Internet. Registrants who FEMA thought met these qualifications based on their limited assessments were deemed eligible for expedited assistance. FEMA implemented different procedures when processing disaster registrations submitted via the Internet and telephone calls. Of the more than 2.5 million registrations successfully recorded in FEMA’s database, 60 percent (more than 1.5 million) were exempt from any identity verification because they were submitted via the telephone. Prior to sending out expedited assistance payments, FEMA did not have procedures in place for Internet or telephone registrations that screened out registrations where the alleged damaged address was a bogus address. The lack of identity verification for telephone registrations, and of any address validation, exposed the government to fraud and abuse of the IHP program. For registrations taken through FEMA’s Web site, registrants were required to first provide a name, SSN, and date of birth. This information was immediately provided (in electronic format) to a FEMA contractor to compare against existing publicly available records. While registrants were waiting on the Internet, the FEMA contractor took steps to verify registrants’ identities. 
The verification steps involved confirming that the SSN matched an SSN in public records, that the name and SSN combination matched an identity registered in public records, and that the SSN was not associated with a deceased individual. The FEMA contractor was responsible for blocking any registrations for which any of these three conditions was not met. Additionally, registrants who passed the first gate had to provide answers to a number of questions aimed at further corroborating the registrants’ identities. Registrants who were rejected via the Internet were advised to contact FEMA via telephone. Our audit and investigative work indicated that this verification process helped deter obviously fraudulent Internet registrations using false names and SSNs. However, FEMA kept no record of the names, SSNs, and other information related to the rejected registrations, and no record of the reasons that the FEMA contractor blocked the registration from going forward. FEMA acknowledged that it was conceivable that individuals who were rejected because of false information submitted via the Internet could get expedited assistance payments by providing the same false information over the telephone. Although the identity verification process appeared to have worked for most Internet registrations, it did not identify a small number of registrations with invalid SSNs. According to information we received from the SSA, nearly 60 Internet registrants who received FEMA payments provided SSNs that were never issued or belonged to individuals who were deceased prior to the hurricanes. Results indicate that these individuals may have passed the verification process because public records used to verify registrants’ identities were flawed. For example, one credit history we obtained indicated that a registrant had established a credit history using an invalid SSN. 
Unlike the Internet process, FEMA did not verify the identity of telephone registrants, who accounted for over 60 percent of the disaster registrations recorded in FEMA’s system. For registrants who registered only via telephone, or registrants who called FEMA subsequent to being denied on the Internet, FEMA did not have controls in place to verify that the SSN had been issued, that the SSN matched with the name, that the SSN did not belong to a deceased individual, or whether the registrants had been rejected on prior Internet registrations. Because the identity of telephone registrants was not subjected to basic verification, FEMA did not have any independent assurance that registrants did not falsify information to obtain disaster assistance. According to FEMA officials, FEMA had a request in place to modify its computer system to allow for identity verification for telephone registrations similar to that used for the Internet. FEMA also represented to us that due to budget constraints and other considerations, the change was not implemented in time to respond to hurricanes Katrina and Rita. However, to date we have not received documentation to validate these representations. The lack of identity verification of phone registrants prior to disbursing funds makes FEMA vulnerable to authorizing expedited assistance payments based on fraudulent information submitted by registrants. Prior to obtaining information on the control procedures FEMA used to authorize expedited assistance payments, we tested the controls by attempting to register for disaster relief through two portals: (1) the Internet via FEMA’s Web site and (2) telephone calls to FEMA. For both portals, we tested FEMA’s controls by providing falsified identities and bogus addresses. In all instances, FEMA’s Web site did not allow us to successfully finalize our registrations. 
Instead, the Web site indicated that there were problems with our registrations and advised us to contact the FEMA toll-free numbers if we thought that we were eligible for assistance. This is consistent with FEMA’s representation that Internet registrations were compared against third-party information to verify identities. Our investigative work also confirmed that the lack of similar controls over telephone registrations exposed FEMA to fraud and abuse. Specifically, in instances where we submitted via the telephone the same exact information that had been rejected on the Internet, i.e., falsified identities and bogus addresses, the information was accepted as valid. Subsequently, the claims were processed and $2,000 expedited assistance checks were issued. Figure 1 provides an example of an expedited assistance check provided to GAO. Additional case study investigations, which we discuss later, further demonstrated that individuals not affected by the disasters could easily provide false information to obtain expedited assistance and other IHP payments from FEMA. Convictions obtained by the Department of Justice also show that others have exploited these control weaknesses and received expedited assistance payments. For example, one individual in a College Station, Texas, relief center pleaded guilty to false claims and mail fraud charges related to IHP and expedited assistance. Despite never having lived in any of the areas affected by the hurricane, this individual registered for and received $4,358 ($2,000 in expedited assistance and $2,358 in rental assistance) in hurricane Katrina IHP payments. We also found that FEMA instituted limited pre-payment checks in the National Emergency Management Information System (NEMIS) to automate the identification of duplicate registrations. However, the subsequent review process used to resolve these duplicate registrations was not effective in preventing duplicate and potentially fraudulent payments. 
We also found that FEMA did not implement procedures to provide assurance that the disaster address was not a bogus address, either for Internet or telephone registrations. FEMA’s controls failed to prevent thousands of registrations with duplicate information from being processed and paid. Our work indicates that FEMA instituted limited automated checks within NEMIS to identify registrations containing duplicate information, e.g., multiple registrations with the same SSNs, duplicate damaged address telephone numbers, and duplicate bank routing numbers. Data FEMA provided enabled us to confirm that NEMIS identified nearly 900,000 registrations—out of 2.5 million total registrations—as potential duplicates. FEMA officials further represented to us that the registrations identified as duplicates by the system were “frozen” from further payments until additional reviews could be conducted. The purpose of the additional reviews was to determine whether the registrations were true duplicates, and therefore payments should continue to be denied, or whether indications existed that the registrations were not true duplicates, and therefore FEMA should make those payments. It appeared from FEMA data that the automated checks and the subsequent review process prevented hundreds of thousands of payments from being made on duplicate registrations. However, FEMA data and our case study investigations also indicate that the additional review process was not entirely effective because it allowed payments based on duplicate information. We also found that FEMA did not implement effective controls for telephone and Internet registrations to verify that the address claimed by registrants as their damaged address existed. As will be discussed further below, many of our case studies of potential fraud show that payments were received based on claims made listing bogus damaged addresses. 
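The NEMIS-style automated duplicate screen described above, which flags a registration that shares an SSN, damaged-address telephone number, or bank routing number with an earlier registration, might look like the following sketch. The field names and sample records are assumptions for illustration, not NEMIS's actual data model.

```python
# Illustrative sketch of an automated duplicate screen: flag (and in
# FEMA's case, "freeze") any registration repeating a key value already
# seen. Field names and sample values are hypothetical.
from collections import defaultdict

KEY_FIELDS = ("ssn", "phone", "routing")

def flag_duplicates(registrations):
    """Return indices of registrations that repeat an SSN, damaged-address
    phone number, or bank routing number from an earlier registration."""
    seen = defaultdict(set)   # key field -> values already observed
    flagged = []
    for i, reg in enumerate(registrations):
        if any(reg[f] in seen[f] for f in KEY_FIELDS):
            flagged.append(i)
        for f in KEY_FIELDS:
            seen[f].add(reg[f])
    return flagged

regs = [
    {"ssn": "111", "phone": "555-0100", "routing": "021000021"},
    {"ssn": "222", "phone": "555-0101", "routing": "021000022"},
    {"ssn": "111", "phone": "555-0199", "routing": "021000099"},  # repeats SSN
]
print(flag_duplicates(regs))  # [2]
```

A screen like this only freezes candidates; as the report notes, the effectiveness of the control ultimately depended on the manual review that decided which frozen registrations were true duplicates.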
Our undercover work also corroborated that FEMA provided expedited assistance to registrants with bogus addresses. With limited or nonexistent validation of registrants’ identities and the reported damaged addresses, it is not surprising that our data mining and investigations found substantial indicators of potential fraud and abuse related to false or duplicate information submitted on disaster registrations. Our audits and investigations of 20 case studies comprising 248 registrations that received payments, and the undercover work we discussed earlier, clearly showed that individuals can obtain hundreds of thousands of dollars of IHP payments based on fraudulent and duplicate information. These case studies are not isolated instances of fraud and abuse. Rather, our data mining results to date indicate that they are illustrative of the wider internal control weaknesses at FEMA—control weaknesses that led to thousands of payments made to individuals who provided FEMA with incorrect information, e.g., incorrect SSNs and bogus addresses, and thousands more made to individuals who submitted multiple registrations for payments. Our audits and investigations of 20 case studies demonstrate that the weak or nonexistent controls over the registration and payment processes have opened the door to improper payments and to individuals seeking to obtain IHP payments through fraudulent means. Specifically, a majority of our case study registrations—165 of 248—contained SSNs that were never issued or belonged to deceased or other individuals. About 20 of the 248 registrations we reviewed were submitted via the Internet. Further, of the over 200 alleged damaged addresses that we tried to visit, about 80 did not exist. Some were vacant lots; others turned out to be bogus apartment buildings and units. Because the hurricanes had destroyed many homes, we were unable to confirm whether about 15 additional addresses had ever existed. 
We also identified other fraud schemes unrelated to the weak and nonexistent validation and prepayment controls previously discussed, such as registrants who submitted registrations using valid addresses that were not their residences. In total, the case study registrants we investigated have collected hundreds of thousands of dollars in payments based on potentially fraudulent activities. These payments include money for expedited assistance, rental assistance, and other IHP payments. Further, as our work progresses, we are uncovering evidence of larger schemes involving multiple registrants that are intended to defraud FEMA. We found these schemes because the registrants shared the same last names, current addresses, and/or damaged addresses—some of which we were able to confirm did not exist. While the facts surrounding the case studies provided us with indicators that potential fraud may have been perpetrated, further testing and investigations need to be conducted to determine whether these individuals were intentionally trying to defraud the government or whether the discrepancies and inaccuracies were the results of other errors. Consequently, we are conducting further investigations into these case studies. Table 1 highlights 10 of the 20 case studies we identified through data mining that we investigated. In addition, some individuals in the cases cited below submitted additional registrations but had not received payments as of mid-December 2005. The following provides illustrative detailed information on several of the cases. Case number 1 involves 17 individuals, several of whom had the same last name, who submitted at least 36 registrations claiming to be disaster victims of both Katrina and Rita. All 36 registrations were submitted through the telephone, using 36 different SSNs and 4 different current addresses. 
These individuals used their own SSNs on 2 of the registrations, but the remaining 34 SSNs were never issued or belonged to deceased or other individuals. The individuals received over $103,000 in IHP payments, including $62,000 in expedited payments and $41,000 in payments for other assistance, including temporary housing assistance. Our analysis shows that the individuals claimed 13 different damaged addresses within a single apartment building, and 4 other addresses within the same block in Louisiana. However, our physical inspection of these addresses revealed that 10 of the addresses were bogus addresses. Further audit and investigative work also shows that these individuals may not have lived at any of the valid disaster addresses at the time of hurricanes Katrina and Rita. We are conducting additional investigations on this case. Case number 2 involves an individual who used 15 different SSNs—one of which was the individual’s own—to submit at least 15 registrations over the telephone. The individual claimed a different damaged address on all 15 registrations, and used 3 different current addresses—including a post office box, where the individual received payments. The individual received 16 payments totaling over $41,000 on 15 of the registrations. In all, the individual received 13 expedited assistance payments, 2 temporary housing assistance payments, and another payment of $10,500. Further investigative work disclosed that the individual may have committed bank fraud by using a false SSN to open a bank account. Other publicly available records indicate that the individual had used 2 SSNs that were issued to other people to establish credit histories. Case number 3 relates to a group of 8 registrations that resulted in 8 payments totaling $16,000. According to FEMA data, an individual registered for Rita disaster assistance at the end of September 2005. 
About 10 days later, the same individual submitted at least 7 additional registrations claiming 7 different disaster addresses, 2 of which we were able to confirm belonged to the individual and may be rental properties that the individual owns. However, because the FEMA database showed that these addresses were entered as the individual’s primary residence—a primary requirement for IHP—the individual received 8 expedited assistance payments instead of just the one that he may have qualified for. We also found that the automated edits established in NEMIS identified these registrations as potential duplicates. In spite of the edit flags, FEMA cleared the registrations for improper expedited assistance payments. Case number 4 involves 2 individuals who appear to be living together at the same current address in Texas. These 2 individuals received payments for 23 registrations submitted over the telephone using 23 different SSNs—2 of which belonged to them—to obtain more than $46,000 in disaster assistance. The information the registrants provided for many of the disaster addresses appeared to be false. The addresses either did not exist, or there was no proof the individuals had ever lived at these addresses. Case number 8 relates to 6 registrants with the same last name who registered for disaster assistance using the same damaged address, with 5 of the 6 using the same current address. FEMA criteria specify that individuals who reside together at the same address and who are displaced to the same address are entitled to only one expedited assistance payment. However, all 6 possible family members received 12 payments totaling over $23,000—$10,000 in expedited assistance and more than $13,000 in other assistance, including rental assistance. The case studies we identified and reported are not isolated instances of potential fraud and abuse. 
Rather, our data mining results show that they are indicative of fraud and abuse beyond these case studies, and point directly to the weaknesses in controls that we have identified. The weaknesses identified through data mining include ineffective controls to detect (1) SSNs that were never issued or belonged to deceased or other individuals, (2) SSNs used more than once, and (3) other duplicate information. Our data mining and case studies clearly show that FEMA’s controls over IHP registrations provided little assurance that registrants provided FEMA with a valid SSN. Under 42 U.S.C. § 408, submitting a false SSN with the intent to deceive in order to obtain a federal benefit or other payment is a felony offense. Based on data provided by the SSA, FEMA made expedited assistance payments to thousands of registrants who provided SSNs that were never issued or belonged to deceased individuals. Further, SSA officials who assisted GAO in analyzing FEMA’s registrant data informed us that tens of thousands more provided SSNs that belonged to other individuals. This problem is clearly illustrated in case 2, where FEMA made payments totaling over $41,000 to an individual using 15 different SSNs. According to SSA records, the individual received payments on 4 SSNs that belonged to deceased individuals and 10 SSNs that did not match with the names provided on the registrations. As previously discussed, further testing and investigations need to be conducted to determine whether this individual was intentionally trying to defraud the government or whether the discrepancies and inaccuracies were the results of other errors. Our data mining and case studies clearly show that FEMA’s controls do not prevent individuals from making multiple IHP registrations using the same SSN. We found thousands of SSNs that were used on more than one registration associated with the same disaster. 
Because an individual can receive disaster relief only on his or her primary residence and an SSN is a unique number assigned to an individual, the same SSN should not be used to receive assistance for the same disaster. This problem is illustrated in case 3 above, where an individual registered for IHP 8 times using the same name, same SSN, and same current address—and thus could have qualified for only 1 expedited assistance payment—but instead received expedited assistance payments of $2,000 for 8 different registrations. Our data mining and case studies also show that the IHP controls to prevent duplicate payments did not prevent FEMA from making payments to tens of thousands of different registrants who used the same key registration information. FEMA’s eligibility criteria specify that individuals who reside together at the same address and who are displaced to the same address are typically entitled to only one expedited assistance payment. FEMA policy also provides for expedited assistance payments to more than one member of the household in unusual circumstances, such as when a household was displaced to different locations. However, both our investigations and data mining found thousands of instances where FEMA made more than one payment to the same household that shared the same last name and damaged and current addresses. As illustrated in case 8, 5 of 6 individuals with the same last name, the same damaged address, and the same current address received multiple expedited assistance payments, instead of just the one for which they qualified. While not all of the registrations that used the same key information were submitted fraudulently, additional investigations need to be conducted to determine whether the entire family was entitled to expedited and other IHP assistance. 
Similarly, our data mining also determined that FEMA made payments to tens of thousands of IHP registrants who provided different damaged addresses but the same exact current address. As shown in case study 4 above, some registrations that fell into this category contained bogus addresses or addresses that were not the registrants’ residences. Under 18 U.S.C. § 1001, a person who knowingly and willfully makes any materially false, fictitious, or fraudulent statement or representation shall be fined or imprisoned up to 5 years, or both. Our data mining also found that FEMA made duplicate expedited assistance payments to tens of thousands of individuals for the same FEMA registration number. FEMA policy states that registrants should only receive one expedited assistance payment. However, in some cases, FEMA paid as many as four $2,000 expedited assistance payments to the same FEMA registration number. As discussed later, we also found that FEMA issued expedited assistance payments to more than 5,000 registrants who had already received debit cards. FEMA officials represented to us that they traced some of these obviously duplicate payments to a computer error that inadvertently caused the duplicate payments. However, they provided no supporting documentation. In the days following hurricane Katrina, FEMA experimented with the use of debit cards to expedite payments of $2,000 to about 11,000 disaster victims at three Texas shelters who, according to FEMA, had difficulties accessing their bank accounts. Figure 2 is an example of a FEMA debit card. The debit card program was an effective means of distributing relief quickly to those most in need. However, we found that because FEMA did not validate the identity of debit card recipients who registered over the telephone, some individuals who supplied FEMA with SSNs that did not belong to them also received debit cards. 
We also found that controls over the debit card program were not effectively designed and implemented to prevent debit card recipients from receiving duplicate expedited assistance payments, once through the debit card and again through check or EFT. Finally, unlike the guidance provided to other IHP registrants, FEMA did not, at the time it distributed the debit cards, provide instructions informing recipients that the funds on their cards must be used for appropriate purposes. As discussed previously, FEMA did not verify the identity of individuals and/or households who submitted disaster registrations over the telephone. This weakness occurred in the debit card program as well. FEMA required the completion of a disaster registration prior to a household or individual being able to receive a debit card. According to FEMA officials, registrants at the three centers applied for assistance via the telephone and Internet. Therefore, to the extent that registrations for the debit card were taken over the telephone, FEMA did not subject the identity of the registrants to a verification process. Consequently, we identified 50 debit cards issued to registrants listing SSNs that the SSA had no record of issuing, and 12 cards issued to registrants using SSNs belonging to deceased individuals. For example, one registrant used an invalid SSN to receive a $2,000 debit card and used about $500 of that money to pay prior traffic violations to reinstate a driver’s license. In another case, a registrant used the SSN of an individual who died in 1995 to receive a $2,000 debit card. FEMA subsequently deposited an additional $7,554 in IHP payments to that debit card account for additional claims submitted by that individual. This registrant withdrew most of the $9,554 deposited into the debit card account through ATM cash withdrawals. 
Based on a comparison of FEMA’s IHP payments and the list of debit card recipients, we found that over 5,000 of the 11,000 debit card recipients received more than one $2,000 expedited assistance payment because they received a debit card and another form of payment (check or EFT). According to FEMA officials, they were aware that several individuals had already registered for IHP assistance and that some payments had already been made prior to issuance of a debit card. However, FEMA officials stated that individuals in the three shelters in Texas would not have access to their home addresses or bank accounts and therefore needed immediate assistance in the form of debit cards. Our review of FEMA data disproved FEMA’s belief that only a few individuals who received debit cards also received other disaster assistance payments. Instead, thousands, or nearly half, of the individuals who received debit cards also received checks or EFTs that were made several days after the debit cards had been issued. The result was that FEMA paid more than $10 million in duplicate expedited assistance payments to individuals who had already received their $2,000 of expedited assistance. In general, once FEMA receives a disaster registration, FEMA sends a package containing IHP information and detailed instructions, including instructions on how to follow up on benefits, how to appeal if denied benefits, and the proper use of IHP payments. However, FMS and FEMA officials informed us that FEMA did not specifically provide instructions on how the debit cards should only be used for necessary expenses and serious needs related to the disasters at the same time the debit cards were distributed. We found that in isolated instances, debit cards were used for adult entertainment, to purchase weapons, and for purchases at a massage parlor that had been previously raided by local police for prostitution. 
Our analysis of debit card transaction data provided by JP Morgan Chase found that the debit cards were used predominantly to obtain cash, which did not allow us to determine how the money was actually used. The majority of the remaining transactions were associated with purchases of food, clothing, and personal necessities. Figure 3 shows a breakdown of the types of purchases made by cardholders. We found that in isolated instances, debit cards were used to purchase goods and services that did not appear to meet serious disaster-related needs as defined by the regulations. In this regard, FEMA regulations provide that IHP assistance be used for housing-related needs and items or services that are essential to a registrant’s ability to overcome disaster-related hardship. Table 2 details some of the debit card activities we found that did not appear to be for essential disaster-related items or services. FEMA has a substantial challenge in balancing the need to get money out quickly to those who are actually in need and sustaining public confidence in disaster programs by taking all possible steps to minimize fraud and abuse. Based on our work to date, we believe that more can be done to prevent fraud through validation of identities and damaged addresses and enhanced use of automated system verification intended to prevent fraudulent disbursements. Once fraudulent registrations are made and money is disbursed, detecting and pursuing those who committed fraud in a comprehensive manner is more costly and may not result in recoveries. Further, many of those fraudulently registered in the FEMA system already received expedited assistance and will likely receive more money, as each registrant can receive as much as $26,200 per registration. Another key element to preventing fraud in the future is to ensure there are consequences for those who commit fraud. 
We plan to refer the fraud cases that we are investigating to the Katrina Fraud Task Force for further investigation and, where appropriate, prosecution. We believe that prosecution of individuals who have obtained disaster relief payments through fraudulent means will send a message for future disasters that there are consequences for defrauding the government. Madam Chairman and Members of the Committee, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-7455 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. To assess controls in place over the Federal Emergency Management Agency (FEMA)’s Individuals and Households Program (IHP), we interviewed FEMA officials and performed walkthroughs at the National Processing Service Center in Winchester, Va. We reviewed the Stafford Act, Pub. L. 93-288, the implementing regulations, and FEMA’s instructions to disaster registrants available via the Internet. In addition, to proactively test controls in place, we applied for assistance using falsified identities, bogus addresses, and fictitious disaster stories to determine if IHP payments could be obtained based on fraudulent information. Because several key requests for documentation from the Department of Homeland Security (DHS) went unanswered, the information needed to fully assess the expedited assistance program was limited. For example, FEMA and DHS had not provided us documentation to enable us to conclusively determine the reason that FEMA submitted some registrations, and did not submit other registrations, to identity validation prior to issuing expedited assistance payments. 
Consequently, our work was limited to our analysis of the FEMA databases, investigations we conducted, data widely available to the public via the Internet, and information FEMA officials orally provided to us. To determine the magnitude and characteristics of IHP payments, we obtained the FEMA IHP database as of December 2005. We validated that the database was complete and reliable by comparing the total disbursements against reports FEMA provided to the Senate Appropriations Committee on Katrina/Rita disbursements. We summarized the amounts of IHP provided by type of assistance and by location of disaster address. To determine whether indications existed of fraud and abuse in expedited assistance and other disbursements, we provided FEMA data to the Social Security Administration (SSA) to verify against their records of valid social security numbers (SSNs). We also used data mining and forensic audit techniques to identify registrations containing obviously false data, such as multiple registrations containing the same name, same current or damaged address, but different SSNs, and registrations containing duplicate information, such as duplicate names and SSNs. To determine whether registrations from our data mining resulted in potentially fraudulent and/or improper payments, we used a nonrepresentative selection of 248 registrations representing 20 case studies (case studies included multiple individuals and registrations) for further investigation. We restricted our case studies to registrations that received payments as of mid-December 2005, and noted that some registrants within our case studies also submitted additional registrations—for which they may receive future payments. We also identified instances where groups of registrants may have been involved in schemes to defraud FEMA. We found these schemes because the registrants provided the same SSNs, last names, current addresses, and/or damaged addresses on their registrations. 
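The data-mining step described above can be approximated as grouping registrations on name and current address and flagging groups that carry more than one SSN. The sketch below is illustrative only: the field names are invented, and a real analysis would require serious name/address normalization (abbreviations, typos, apartment numbers) rather than simple lowercasing.

```python
from collections import defaultdict

def find_multi_ssn_groups(registrations):
    """Return (name, address) groups whose registrations list different SSNs.

    `registrations` is a list of dicts with "name", "current_address", and
    "ssn" keys -- a hypothetical schema for illustration.
    """
    groups = defaultdict(set)
    for reg in registrations:
        # Crude normalization; real matching would be far more robust.
        key = (reg["name"].lower(), reg["current_address"].lower())
        groups[key].add(reg["ssn"])
    # Keep only groups where one name/address pair used multiple SSNs.
    return {key: ssns for key, ssns in groups.items() if len(ssns) > 1}
```

Groups returned by a check like this would correspond to the "same name, same current or damaged address, but different SSNs" pattern the testimony describes, and would be candidates for the kind of case-study investigation discussed above.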
Our macro analysis of potentially fraudulent use of SSNs and other data mining are ongoing, and we plan to report additional results at a future date. For purposes of this testimony, we did not conduct sufficient work to project the magnitude of potentially fraudulent and improper payments of IHP. We also visited over 200 of the claimed damaged addresses related to our case studies to determine whether or not the addresses were valid. To assess the types of purchases made with FEMA debit cards distributed at relief centers, we reviewed a database of transactions provided by JP Morgan Chase, the administrating bank for the debit cards. SSA also assisted us in comparing cardholder data with SSA records to determine whether registrants receiving debit cards had provided valid identities. We performed data mining on debit card transactions to identify purchases that did not appear to be indicative of necessary expenses as defined by the Stafford Act’s implementing regulations. Finally, we validated specific transactions identified in the database by obtaining information on actual items purchased from the vendors. In the course of our work, we made numerous written requests for key documents and sets of data related to the IHP, most dating back to October 2005. While FEMA officials promptly complied with one key part of our request—that is, FEMA made available databases of IHP registrants and payments—the majority of items requested have not been provided. On January 18, 2006, the Department of Homeland Security Office of General Counsel provided us with well under half of the documents requested. For example, FEMA and the DHS had not provided us documentation to enable us to conclusively determine the reason that FEMA submitted some registrations, and did not submit other registrations, to identity validation prior to issuing expedited assistance payments. 
While the database and other data provided by FEMA enabled us to design procedures to test the effectiveness of FEMA’s system of internal controls, they did not enable us to comprehensively determine the root causes of the weak or nonexistent controls. During the course of our audit work, we identified multiple cases of potential fraud. We plan to refer cases in which we found significant evidence of fraudulent activity directly to the Hurricane Katrina Fraud Task Force. Except for scope limitations due to a lack of documentation provided by DHS, we performed our work from October 2005 through January 2006 in accordance with generally accepted government auditing standards and quality standards for investigations as set forth by the President’s Council on Integrity and Efficiency. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | As a result of widespread congressional and public interest in the federal response to hurricanes Katrina and Rita, GAO conducted an audit of the Individuals and Households Program (IHP) under Comptroller General of the United States statutory authority. Hurricanes Katrina and Rita destroyed homes and displaced millions of individuals. In the wake of these natural disasters, FEMA faced the challenge of providing assistance quickly and with minimal "red tape," while having sufficient controls to provide assurance that benefits were paid only to eligible individuals and households. In response to this challenge, FEMA provided $2,000 in IHP payments to affected households via its Expedited Assistance (EA) program. Victims who received EA may qualify for up to $26,200 in IHP assistance. 
As of mid-December 2005, IHP payments totaled about $5.4 billion, with $2.3 billion provided in the form of EA. These payments were made via checks, electronic fund transfers, and a small number of debit cards. GAO's testimony will provide the results to date related to whether (1) controls are in place and operating effectively to limit EA to qualified applicants, (2) indications exist of fraud and abuse in the application for and receipt of EA and other payments, and (3) controls are in place and operating effectively over debit cards to prevent duplicate EA payments and improper usage. We identified significant flaws in the process for registering disaster victims that leave the federal government vulnerable to fraud and abuse of EA payments. For Internet applications, limited automated controls were in place to verify a registrant's identity. However, we found no independent verification of the identity of registrants who registered for disaster assistance over the telephone. To demonstrate the vulnerability inherent in the call-in applications, we used falsified identities, bogus addresses, and fabricated disaster stories to register for IHP. We also found that FEMA's automated system frequently identified potentially fraudulent registrations, such as multiple registrations with identical social security numbers (SSNs) but different addresses. However, the manual process used to review these registrations did not prevent EA and other payments from being issued. Other control weaknesses include the lack of any validation of damaged property addresses for both Internet and telephone registrations. Given the weak or nonexistent controls, it is not surprising that our data mining and investigations to date show the potential for substantial fraud and abuse of EA. Thousands of registrants misused SSNs, i.e., used SSNs that were never issued or belonged to deceased or other individuals. 
Our case study investigations of several hundred registrations also indicate significant misuse of SSNs and the use of bogus damaged property addresses. For example, our visits to over 200 of the case study damaged properties in Texas and Louisiana showed that at least 80 of these properties were bogus--including vacant lots and nonexistent apartments. We found that FEMA also made duplicate EA payments to about 5,000 of the nearly 11,000 debit card recipients--once through the distribution of debit cards and again by check or electronic funds transfer. We found that while debit cards were used predominantly to obtain cash, food, clothing, and personal necessities, a small number were used for adult entertainment, bail bond services, and weapons purchases, which do not appear to be items or services essential to meeting serious disaster-related needs. |
When the Congress passed the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), commonly known as the Superfund law, in 1980, it established a trust fund (Superfund), financed primarily by taxes on crude oil and certain chemicals, for cleaning up highly contaminated hazardous waste sites. It also required EPA to develop a list of priorities for cleaning up the most hazardous waste sites, called the National Priorities List (NPL). In 1986, the Congress reauthorized Superfund and required EPA to meet certain cleanup schedules and to give preference to methods that permanently decontaminate sites. In 1990, the Congress again reauthorized Superfund, adding $5.1 billion to the program. In total, $15.2 billion has been authorized for the program. As of September 30, 1995, EPA had listed or proposed to list about 1,290 sites on the NPL, completed construction at about 304 sites, and deleted 84 sites from the list. The agency estimates that the average cost of cleaning up an NPL site, to the federal government or responsible parties, is $26 million. The Senate bill contains numerous provisions designed to address cleanup costs. We would now like to discuss some of the implications of the bill’s provisions. Several provisions in S. 1285 would elevate the role of risk and site-specific risk assessments in decisions about whether and how extensively a site should be cleaned up. We have reported that basing these decisions on environmental standards and generic assumptions about such things as the projected future use of a site, rather than actual data from the site, can lead to extensive and costly cleanups. EPA has recently introduced reforms to resolve some of these issues, but S. 1285 would take more extensive measures. One criticism of the current Superfund law is that it has resulted in some cleanups that were more extensive and costlier than were warranted by the health risks at sites. 
Such results may occur, in part, because the 1986 Superfund amendments require that cleanups comply with all “applicable” or “relevant and appropriate” standards set in other federal and state environmental laws, including certain standards set under the Safe Drinking Water Act and the Clean Water Act where relevant and appropriate. The Senate bill would require compliance only with “applicable” standards, that is, with those that directly pertain to hazardous waste cleanups, and it would eliminate the reference to the specific acts. The bill would also provide opportunities for the states to define their own applicable standards. The proposed legislation would allow EPA to waive the standards if reaching them, among other things, is technically infeasible or unreasonably costly. The extent to which this change affects cleanups will depend, in large part, on whether states establish their own cleanup standards and whether these standards differ significantly from those that have been used at cleanups to date. In a recent survey of states, we found that 21 of the 33 states we contacted had already set standards for groundwater or soil cleanups, or for both types, that specify numeric limits on acceptable concentrations of chemicals. Additionally, some states have general policies about cleanup, such as requirements that chemicals be limited to the levels that occur naturally in the immediate environment. For groundwater, 20 of the states had set numeric standards that were similar to the federal drinking water standards, although most of these states had set more stringent standards for a few chemicals. For soil, which has few federal standards, 13 of the 20 states had set their own cleanup standards. We did find, however, that the states were flexible in allowing exceptions to the cleanup levels required under the standards in order to account for conditions making it difficult or unnecessary to reach the standards. 
In states that have not established their own standards, site-specific risk assessments would play a more important role in determining the extent of Superfund cleanups. When we reviewed EPA’s data for 225 Superfund sites, we found that having used risk assessments instead of standards to determine the need for cleanups would not significantly have changed the number of sites cleaned up but would sometimes have changed the extent of the cleanups. To comply with the law, EPA had used federal and state standards rather than risk assessments to determine the extent of the cleanups at about three-fourths of the 139 sites in the database for which information on the basis for cleanup was available. If EPA had relied more on risk assessments, as S. 1285 would require, some of these cleanups might have been less extensive—and less expensive. This is because, as EPA program officials acknowledged, standards tend to require more stringent cleanups than risk assessments. Using risk assessments to determine cleanup levels without changing the risk assessment process itself could still result in more extensive cleanups than might be warranted at some sites. When EPA lacks specific data about a site, it makes assumptions in its estimates of risk about both the quantity of contaminants that will reach people and the toxicity of these contaminants. EPA tends to make relatively conservative assumptions, justifying this tendency on the basis that it has a mandate from the Congress to protect all individuals around Superfund sites. Critics argue that these assumptions are not realistic for all sites. The Senate proposal calls for the use of site-specific data and “realistic and plausible” assumptions about the risks posed by contaminants. For example, the bill calls for considering data about a site when deciding how the land at a site may be used in the future. 
Determining a site’s future use is key to estimating people’s future exposure to contaminants at the site, which, in turn, helps to determine the level of cleanup required for the site. We found that when EPA lacked specific data on a site’s future use, it adopted the assumptions that would be the most protective of human health, namely, that the land would be used for residential rather than commercial or industrial purposes. Assuming future residential use can lead to estimates of health risks that warrant cleaning up a site immediately. In reviewing EPA’s data for 225 Superfund sites, we found that at about half of the 190 sites where EPA had decided cleanup was necessary, the health risks were ranked as high not because of the land’s current use but because EPA had assumed the land’s use would change in the future. EPA recognizes that this assumption leads to costlier cleanups, and last year the agency decided to use more site-specific data when deciding what assumptions to make about a site’s future uses. The proposed legislation would go further in incorporating assumptions about future land use in cleanup decisions. Given that both the government and private industry have spent billions of dollars to date on the Superfund program but significant numbers of sites remain to be cleaned up, it is important to achieve the maximum amount of environmental protection from the available federal resources. We reported in 1994 that although EPA had adopted a policy of addressing the worst sites first, its regional offices had set priorities using such factors as the amount of work needed to evaluate a site instead of considering the site’s health and environmental risks. Recently, in response to expected budget reductions, EPA convened a panel to help rank NPL sites nationwide on the basis of risk and other factors. In the past, EPA has taken similar actions when resources have been limited, but its efforts have been short-term.
We have also reported that national risk-based priority-setting systems have not been fully implemented at the Departments of Defense and Energy. Of the hundreds of federally owned hazardous waste sites, only eight have been cleaned up so far. Although most of the work remains to be done, agencies’ budgets for the federal cleanup and compliance effort, whose costs may ultimately total $400 billion, have been declining. By basing cleanup priorities largely on the relative risks of sites, agencies could ensure that funds are effectively allocated. Both the Congress and EPA are concerned about the high costs of cleanups and are trying to curb these costs. The current law’s requirements that cleanup remedies comply with federal and state environmental standards and permanently treat waste have limited the alternatives available to cut costs. EPA now plans to review any remedies that exceed certain cost thresholds to determine whether lower-cost alternatives are available. The agency is also evaluating the results of a 1992 initiative that uses EPA’s emergency response, or removal, authority to clean up portions of sites more quickly and at less cost. The Senate bill would also address costs by eliminating the preference for permanent treatments of waste, thereby allowing for greater consideration of remedies that rely on the containment of waste than exists under current law. The bill would also increase the current law’s dollar and time limits on federally funded removals. These changes would facilitate EPA’s use, where appropriate, of removal actions, which are faster and less costly than EPA’s traditional cleanup processes. After the 1986 amendments to CERCLA established a preference for permanently treating wastes, EPA increasingly selected permanent measures, such as incinerating contaminated soil, rather than containment measures, such as fencing it off from human contact. 
This preference, in turn, increased shorter-term cleanup costs because constructing treatment technologies is more expensive than installing containment measures. Currently, about half of all cleanup plans include permanent treatments. The bill would eliminate the preference for permanence, thereby allowing the expanded use of containment options. Such options include implementing lower-cost engineering controls (like waterproof covers to contain rather than clean up waste) and institutional controls (like land-use restrictions), as long as they protect human health and the environment. However, while the costs to implement these remedies might be lower, they would require long-term monitoring and maintenance to ensure that they remain protective. We estimate that the average cost of operating and maintaining a site with contained waste could be $5 million over 30 years. We also estimate that during this period, overall operation and maintenance costs to the federal government, states, and responsible parties could be $5 billion, $8 billion, and $18 billion, respectively. We recently reported EPA had identified several sites where alternatives to treatment had been used and problems had developed, requiring additional work. In response to criticism that cleanups were costing too much and taking too long, EPA implemented its Superfund Accelerated Cleanup Model in 1992. One of the model’s initiatives was for EPA to expand its use of non-time-critical removals—actions the agency typically uses to clean up portions of sites requiring urgent, but not emergency, treatment. These non-time-critical removals result in quicker cleanups than actions taken under EPA’s traditional remedial program because they streamline the steps used to study a site’s contamination and design a cleanup method. The Senate bill would raise the current law’s dollar and time limits on federally funded removal actions. 
We recently reported that EPA could potentially use these removals at portions of the 1,000 sites currently awaiting cleanup on the NPL, as well as at portions of the estimated 2,000 additional sites that could be listed. Typically, for these portions, EPA is more certain of the types of contamination present and the appropriate methods to address it, and the agency does not need to conduct extensive studies and designs before taking action. EPA site managers estimate that the non-time-critical removals conducted to date have reduced cleanup time—from 4 years to 2 years, on average—and saved money—cutting $500,000 from an average total cleanup cost of $4.1 million per site. By addressing contamination sooner, these actions also can reduce risks to public health and prevent contamination from spreading farther in the environment. However, the current legal time and dollar limits on these actions constrain the use of these removals at federally funded sites. Raising the limits from 12 months to 24 months and from $2 million to $4 million, as S. 1285 provides, would allow EPA to use these removals more easily at portions of many Superfund sites. Although EPA has assessed and cleaned up some NPL sites, thousands of other sites need to be addressed. The bill would authorize EPA to delegate some of its responsibilities for cleanups at NPL sites to qualified states. To promote faster and less costly cleanups, the bill would provide financial and technical assistance to states to set up programs through which private parties would voluntarily clean up sites under a state’s supervision. At this Committee’s request, we are currently reviewing several programs from among the 31 states with such programs to identify their best practices, including streamlining cleanups, creating financial incentives for redevelopment, and protecting property owners from further liability for contamination.
In other work for this Committee, we are addressing the redevelopment of abandoned or underutilized contaminated urban properties, known as brownfields. Our work shows that the bill—in limiting the liability of lenders, property owners, and prospective purchasers—would help to remove barriers to the properties’ redevelopment. In addition, the loans that would be provided under the bill to municipalities would cover the up-front costs at most brownfield sites of assessing the sites for contamination and cleaning them up. These measures would help reduce the uncertainty that currently makes these sites unattractive to developers. Whether states have sufficient resources to implement the Superfund program is an issue in transferring the federal government’s responsibility for the program to them. For example, one provision in the bill would allow only 125 more sites to be added to the NPL. Our recent work shows that under this limit, the states could acquire responsibility for the 1,400 to 2,300 potential NPL sites whose cleanups could otherwise have been funded out of the Superfund trust fund, at a potential cost of $8 billion to $20 billion. Seven of the eight states whose programs we studied were concerned about their financial ability to manage these additional cleanups, given their current level of funding for environmental restoration. The additional cleanup costs they could face under a capped NPL would depend on whether and how quickly they decided to address the additional sites. We have previously reported that parties involved at Superfund sites incur high legal expenses to resolve their liability for contamination and allocate cleanup expenses among one another. The Senate bill would change the current liability rules and establish a nonbinding process to allocate costs at some sites. Our work in this area has documented the extent and causes of responsible parties’ legal expenses. 
Our 1994 survey of the Superfund legal expenses of Fortune 500 Industrial and Service corporations showed that about half of the respondents had been involved at Superfund sites and had spent a median of $1.5 million at each site. One-third of their Superfund costs were legal expenses, incurred primarily in allocating the responsibility for cleanup costs among the responsible parties. For de minimis parties, that is, those responsible for minor contamination at a site, legal expenses constituted almost half of the total Superfund costs. The corporations attributed their high legal expenses to EPA’s not identifying all responsible parties or taking action against all parties that had been identified. When parties identified by EPA believe that others are also responsible but have not been pursued by EPA, they will often sue these other parties themselves for a contribution to the cleanup costs. The defendants in these contribution suits are sometimes responsible for only small amounts of waste at sites. Seventy-one percent of the respondents to our survey said that they were parties to these contribution suits. Provisions in S. 1285 would address these issues. The bill’s allocation process would allow parties to submit information to EPA about others they believe should share in the costs. De minimis parties, defined as those who contributed relatively small amounts of hazardous waste at a site, would be exempt from liability. In addition, certain civic or charitable organizations would have limited liability. Under the bill’s allocation process, the government would cover the costs attributed to parties that cannot pay or are exempt from liability, as well as other costs. In addition to their cleanup obligations, responsible parties are liable for damages to natural resources caused by contamination. The bill would limit natural resource damage claims under CERCLA. 
At this Committee’s request, we have determined the amount of past federal damage settlements and estimated the potential for future federal claims. We reported that settlements to date and estimated future claims have been or are likely to be limited to a relatively small number of sites. However, some future claims could be large. Through April 1995, the Department of the Interior and the National Oceanic and Atmospheric Administration, the principal federal agencies with trustee responsibility for natural resources, together had reached 98 settlements, about half of which involved cash payments to these agencies by responsible parties totaling $106 million. The median amount of the settlements requiring payment was $200,000—a small figure compared with the current average cost to clean up a site of about $26 million. Eleven settlements required payments of over $1 million. Officials from the two agencies estimate that the claims for up to 20 of their pending and future cases may eventually exceed $50 million each and that the claims for up to another 40 cases may range between $5 million and $50 million each. According to Department of Justice officials involved in these claims, the number of future cases is likely to be limited by a shortage of enforcement resources and the difficulty of establishing responsibility for damages. The Senate bill would authorize the use of new, potentially cost-reducing technologies at certain federally owned hazardous waste sites. The bill would authorize the President to designate specific federal facilities as sites for testing innovative technologies and authorize the EPA Administrator to approve their use at these sites. Our reports have shown that although EPA and the Departments of Defense and Energy have spent substantial sums to develop waste cleanup technologies, few such technologies have been used in cleanups. 
Even when a new technology has been successfully demonstrated, we found, agencies are often reluctant to try it because of its unfamiliarity or other reasons. This new authority may help to overcome the resistance we found to these technologies.

GAO discussed how the proposed Accelerated Cleanup and Environmental Restoration Act would affect the Superfund Program’s reauthorization.
GAO noted that the legislation would: (1) require that cleanups comply only with standards that pertain to hazardous waste cleanups, rather than with water quality standards; (2) allow states to define their own standards and permit the Environmental Protection Agency (EPA) to waive those standards; (3) require that EPA use site-specific data and less conservative assumptions when assessing cleanup sites; (4) require that cleanup sites be ranked by risk; (5) allow the expanded use of low-cost hazardous waste containment measures at cleanup sites; (6) relax the restrictions on non-time-critical removals, which can speed the cleanup process; (7) provide assistance to states to establish programs through which private parties would voluntarily clean up sites; (8) restrict the number of additional sites that could be added to the National Priorities List (NPL); (9) shift the financial burden of the Superfund program from the federal government to state governments; (10) limit the liability of responsible parties at Superfund sites, establish a nonbinding process to distribute the cost of cleanups, and decrease liability for natural resource damage; and (11) mandate the testing and use of new, potentially cost-reducing technologies such as bioremediation at cleanup sites.
Many of our reports and testimonies include recommendations that, if acted upon, may result in tangible benefits for the U.S. taxpayer by improving the federal government’s efficiency, effectiveness, and accountability. Implemented recommendations can result in financial or nonfinancial benefits for the federal government. An estimated financial benefit is based on actions taken in response to our recommendations, such as reducing government expenditures, increasing revenues, or reallocating funds to other areas. For example, in fiscal year 2015, our work resulted in $74.4 billion in financial benefits across the federal government. Other benefits that result from our work cannot be measured in dollar terms, and we refer to them as nonfinancial or other benefits. These benefits are linked to specific recommendations or other work that we completed over several years and could include improvements to agency programs, processes, and policies. During fiscal year 2015, we recorded a total of 1,286 other benefits government-wide that resulted from our work, including improved services to the public and government business operations. As part of our responsibilities under generally accepted government auditing standards, we follow up on recommendations we have made and report their status to Congress. Agencies also have a responsibility to monitor and maintain accurate records on the status of our recommendations. After issuing a report, we follow up with reviewed agencies at least once a year to determine the extent to which our recommendations have been implemented and the benefits that have been realized. During this follow-up, we identify what additional actions, if any, would be needed to address our recommendations. A recommendation is considered implemented when actions have been taken that, consistent with our recommendation, address the issue or deficiency we identified and upon which the recommendation is based. 
Experience has shown that it takes time for some recommendations to be implemented. For this reason, we actively track each unaddressed (i.e., open) recommendation for 4 years and review it to determine whether implementation can be reasonably expected. The review includes consideration of alternative strategies the agency may have for implementing the recommendations. We close a recommendation as not implemented if TSA indicates that it is not taking action or if we determine that TSA is unlikely to act on it. We maintain a publicly available database with information on the current status of most open recommendations. The database allows searches by agency, congressional committee, or key words and is available at http://www.gao.gov/openrecs.html. TSA has implemented 51 of the 58 recommendations we made from October 1, 2003, through July 31, 2015, intended to improve TSA’s acquisition of security-related technology, and has not implemented the remaining 7 recommendations (see fig. 1). The 58 recommendations relating to the acquisition of security-related technology fall into three general categories: actions requiring the agency to (1) develop a plan, process, protocol, or strategy; (2) implement a plan, program, policy, procedure, or best practice; and (3) conduct a test, study, or analysis. For example, in 2011 we recommended that TSA develop a process to communicate information to explosives detection system (EDS) vendors in a timely manner regarding EDS acquisition and to ensure that TSA takes a comprehensive and cost-effective approach to procuring and deploying EDS that meet the 2010 explosives detection requirements and any subsequent revisions. Separately, in 2008 we recommended that TSA fully incorporate best practices into developing Secure Flight life-cycle cost and schedule estimates and develop a plan for mitigating cost and schedule risks, among other things.
In 2009, we recommended that TSA conduct a cost-benefit analysis to assist in prioritizing investments in new checkpoint screening technologies to help TSA take a comprehensive, risk-informed approach to procuring and deploying airport passenger screening technologies. As shown in figure 2, we identified 31 recommendations for the first category, 7 recommendations for the second category, and 20 recommendations for the third category. TSA has not implemented 7 recommendations we made from fiscal year 2003 through July 31, 2015. Four of the 7 recommendations have been closed, while 3 remain open. The 3 open recommendations are focused on improving advanced imaging technology (AIT) operations, while the 4 closed recommendations were related to establishing the effectiveness of canine screening, conducting technical assessments to strengthen airport perimeter security, and AIT operations. For example: In 2014, we recommended that TSA establish protocols to facilitate capturing operational data on secondary passenger screening at the checkpoint to determine the extent to which rates of false alarms for various AIT systems affect operational costs once Advanced Imaging Technology-Automated Target Recognition (AIT-ATR) systems are networked. TSA concurred with this recommendation. In its comments on our report, TSA stated that it would monitor, update, and report the results of its efforts to capture operational data and evaluate its associated impacts on operational costs. When contacted in November 2015 for an update on this recommendation, TSA officials stated that they have taken steps toward implementing this recommendation by evaluating the impact of false alarm rates on operational costs (such as staffing) during testing for new AIT systems. This recommendation remains open pending additional actions by TSA to collect secondary screening data on an ongoing basis, which could be used to obtain valuable insights on false alarm rates and the resulting operational costs.
By fully implementing the recommendation, TSA could improve the overall performance of the AIT system and make more informed decisions about checkpoint screening. In 2013, we recommended that TSA expand and complete testing, in conjunction with the DHS Science and Technology Directorate, to assess the effectiveness of passenger screening canines (PSC) and conventional canines in all areas of an airport deemed appropriate by TSA before deploying more passenger screening canine teams to help (a) determine whether PSCs are effective at screening passengers and whether expenditures for PSC training are warranted and (b) inform decisions about the type of canine teams to deploy and their optimal locations in airports. TSA concurred with the recommendation and took some action, but did not fully address the recommendation. Specifically, in June 2014, TSA reported that it had assessed PSC teams deployed to 27 airports, culminating in a total of 1,048 tests. On the basis of these tests, TSA determined that PSC teams are effective and should be deployed at the checkpoint queue. However, when contacted in December 2014, officials reported that they did not plan to expand or complete testing to compare the effectiveness of PSCs with the effectiveness of conventional canine teams as we recommended, citing concerns about potential liability in using conventional canines that have not been evaluated for their suitability for screening passengers in an unfamiliar passenger screening environment and the related risks to the program. We disagreed and pointed out that conventional canines paired with handlers already work in proximity with passengers since they patrol airport terminals, including ticket counters and curbside areas. Given that TSA does not plan on taking further action on this program, we closed the recommendation as not implemented. However, we continue to believe that the recommendation has merit and should be fully implemented.
Since fiscal year 2003, we have identified approximately $1.7 billion in financial benefits, largely representing funds that TSA used to support other programs and activities, based on implementation of our recommendations as well as findings from our related reports and testimonies on security-related technology acquisitions. We have also identified additional benefits, including programmatic and process improvements to TSA’s programs stemming from implementation of our recommendations and related work. The following are examples of the financial, programmatic, and process benefits we identified. In January 2012, we issued a report on TSA’s adherence to DHS acquisition policy and efforts to test the effectiveness of AIT. Among other things, we reported on the effectiveness of the systems and recommended that TSA brief Congress. Congressional response to a TSA briefing, combined with our body of work on AIT, resulted in TSA’s decision to reduce the number of planned AIT purchases by approximately $1.4 billion. In several reports and testimonies in fiscal year 2004, we reported on delays and challenges in TSA’s development of the Computer-Assisted Passenger Prescreening System II (CAPPS II). We found that TSA had not fulfilled statutory requirements concerning the development and operations of CAPPS II. For example, TSA had not determined the accuracy of the databases that would be used to prescreen passengers, and had not conducted tests that would stress the program and ensure system functionality. We made several recommendations relating to developing project plans (including schedules and estimated costs), conducting system testing, and developing a process by which passengers can get erroneous information corrected, among others. In part because of our initial and subsequent evaluations, and congressional oversight hearings, TSA reprogrammed approximately $46 million in funding for CAPPS II to other TSA activities in fiscal years 2003 through 2005.
TSA canceled CAPPS II development in August 2004 and, shortly after that, announced plans to develop a successor passenger prescreening program called Secure Flight. The projected funding for the canceled CAPPS II program resulted in total programmatic savings of approximately $304 million. In April 2012, we found that TSA had established cost estimates for the Electronic Baggage Screening Program (EBSP) to help identify total program cost, recapitalization cost, and potential savings resulting from installing optimal systems, but its processes for developing these estimates did not fully comply with best practices. We recommended that, in order to strengthen the credibility, comprehensiveness, and reliability of TSA’s cost estimates and related savings estimates for EBSP, TSA should follow cost-estimating best practices. In response, TSA implemented a management directive that applies DHS guidance and best practices from the GAO Cost Estimating and Assessment Guide and updated its cost-estimating best practices to include four characteristics of a high-quality and reliable cost estimate: comprehensive, well-documented, accurate, and credible. By implementing this recommendation, TSA improved its ability to determine the cost of the program and plan for the resources required to develop and manage the EBSP. In October 2009, we found that TSA had completed a strategic plan to guide research, development, and deployment of passenger checkpoint screening technologies; however, the plan was not risk-based. We recommended that TSA take a comprehensive, risk-informed approach to procuring and deploying technologies for airport passenger checkpoint screening. Specifically, we recommended that, to the extent feasible, TSA should complete operational tests and evaluations before deploying screening technologies to airport checkpoints.
In response to our recommendation, in March 2010, TSA updated its Aviation Modal Risk Assessment, which included a comprehensive risk assessment of the potential for a terrorist attack and implemented a test and evaluation process for all of its technology procurements in accordance with DHS policy. TSA’s actions increased its ability to successfully procure and deploy passenger and checkpoint screening technologies. In 2011, we found that TSA did not effectively communicate in a timely manner with vendors competing for EDS procurement. To help ensure that TSA takes a comprehensive and cost-effective approach to procuring and deploying EDS, in July 2011 we recommended that TSA establish a process to communicate information to EDS vendors in a timely manner about TSA’s EDS acquisitions, including changes to the procurement schedule. In April 2012, TSA provided information on a number of actions it had taken to improve communication with EDS vendors, such as issuing 16 public notifications that contained projected schedules and program updates. In October 2012, TSA finalized its Explosives Detection System Competitive Procurement Qualification Program Communications Plan, which established a process for more timely communication with vendors competing for EDS procurements. In addition, TSA used a qualified products list in its EBSP acquisition plan that awarded contracts only to precertified vendors. By establishing a process to communicate with vendors in a timely manner, TSA was better positioned to avoid delays and procurement cost overruns. In May 2004, we found that TSA’s Office of Acquisition was at an organizational level too low to effectively oversee the acquisition process, coordinate acquisition activities, and enforce acquisition policies effectively. The position of the office hindered its ability to help ensure that TSA follows acquisition processes that enable the agency to get the best value on goods and services. 
We recommended that TSA elevate the Office of Acquisition to an appropriate level within TSA to enable it to identify, analyze, prioritize, and coordinate agencywide acquisition needs. In October 2004, TSA elevated its Office of Acquisition to report directly to the Deputy Administrator of TSA. The Office of Acquisition instituted an outreach program to provide acquisition expertise to program offices on each area of the acquisition life cycle. TSA also issued an Investment Review Process guide in January 2005 that outlines acquisition personnel roles, responsibilities, and procedures for conducting acquisition program reviews at each key decision point. By taking these actions, DHS and TSA were better positioned to make TSA’s acquisition process more efficient and improved TSA’s ability to implement its acquisition policies and procedures. We provided a draft of this report to DHS for review and comment. DHS did not provide formal comments but provided technical comments from TSA which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or GroverJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Recommendation

To address the challenges associated with the development, implementation, and operation of the Computer Assisted Passenger Prescreening System (CAPPS II), the Secretary of Homeland Security should instruct the Administrator of the Transportation Security Administration (TSA) to develop plans identifying the specific functionality that will be delivered during each increment of CAPPS II, the specific milestones for delivering this functionality, and expected costs for each increment.

To address the challenges associated with the development, implementation, and operation of CAPPS II, the Secretary of Homeland Security should instruct the Administrator of TSA to use established plans to track development progress to ensure that promised functionality is being delivered on time and within established cost estimates.

To address the challenges associated with the development, implementation, and operation of CAPPS II, the Secretary of Homeland Security should instruct the Administrator of TSA to develop a schedule for critical security activities, including finalizing the security policy, the security risk assessment, and system certification and accreditation.

To address the challenges associated with the development, implementation, and operation of CAPPS II, the Secretary of Homeland Security should instruct the Administrator of TSA to develop a strategy for mitigating the high risk associated with system and database testing that ensures (1) accuracy testing of commercial and government databases is conducted prior to the databases being used and (2) appropriate stress testing is conducted to demonstrate the system can meet peak load requirements.
To address the challenges associated with the development, implementation, and operation of CAPPS II, the Secretary of Homeland Security should instruct the Administrator of TSA to develop results-oriented performance goals and measures to evaluate the program’s effectiveness, including measures to assess performance of the system in generating reliable risk scores.

To help ensure that TSA receives the goods and services it needs at the best value to the government, the Secretary of Homeland Security should direct the Administrator of TSA to elevate the Office of Acquisition to an appropriate level within TSA to enable it to identify, analyze, prioritize, and coordinate agencywide acquisition needs.

To help ensure that TSA receives the goods and services it needs at the best value to the government, the Secretary of Homeland Security should direct the Administrator of TSA to develop an adequate system of internal controls, performance measures, and incentives to ensure that policies and processes for ensuring efficient and effective acquisitions are implemented appropriately.

To help ensure that TSA receives the goods and services it needs at the best value to the government, the Secretary of Homeland Security should direct the Administrator of TSA to direct the TSA Human Capital Office to do the following in coordination with key offices in the Department of Homeland Security: (1) assess TSA’s current acquisition workforce (as defined by the Department of Homeland Security) to determine the number, skills, and competencies of the workforce; (2) identify any gaps in the number, skills, and competencies of the current acquisition workforce; and (3) develop strategies to address any gaps identified, including plans to attract, retain, and train the workforce.
The Secretary of Homeland Security should ensure that its planned departmentwide knowledge management system provides TSA sufficient data and analytic capability to measure and analyze spending activities and performance, and thereby highlight opportunities to reduce costs and improve service levels.

The Secretary of Homeland Security should ensure that its planned departmentwide knowledge management system provides TSA sufficient data and analytic capability to support effective oversight of acquisitions.

To help ensure that TSA is able to articulate and justify future decisions on how best to proceed with security evaluations, fund and implement security improvements, including new security technologies, and implement additional measures to reduce the potential security risks posed by airport workers, the Secretary of Homeland Security should direct TSA’s Administrator to develop and provide Congress with a plan for meeting the requirements of the Aviation and Transportation Security Act (ATSA).

The Secretary of Homeland Security should direct TSA’s Administrator to develop and provide Congress with a plan for meeting the requirements of ATSA by conducting assessments of technology, compiling the results of these assessments as well as assessments conducted independently by airport operators, and communicating the integrated results of these assessments to airport operators.

The Secretary of Homeland Security should direct TSA’s Administrator to develop and provide Congress with a plan for meeting the requirements of ATSA by using the information resulting from the security evaluation and technology assessment efforts cited above as a basis for providing guidance and prioritizing funding to airports for enhancing the security of the commercial airport system as a whole.
In developing the comprehensive plan for installing in-line explosives detection systems (EDS) baggage screening systems, as directed by the fiscal year 2005 DHS Appropriation Act Conference Report, and in satisfying the requirements set forth in the Intelligence Reform and Terrorism Prevention Act of 2004, the Secretary of the Department of Homeland Security should direct the Administrator of TSA to systematically assess the costs and benefits of deploying in-line baggage screening systems at airports that do not yet have in-line systems installed. As part of this assessment, the Administrator should identify and prioritize the airports where the benefits, in terms of cost savings of baggage screening operations and improved security, of replacing stand-alone baggage screening systems with in-line systems are likely to exceed the costs of the systems, or the systems are needed to address security risks or related factors.

In developing the comprehensive plan for installing in-line EDS baggage screening systems, as directed by the fiscal year 2005 DHS Appropriation Act Conference Report, and in satisfying the requirements set forth in the Intelligence Reform and Terrorism Prevention Act of 2004, the Secretary of the Department of Homeland Security should direct the Administrator of TSA to systematically assess the costs and benefits of deploying in-line baggage screening systems at airports that do not yet have in-line systems installed. As part of this assessment, the Administrator should consider the projected availability and costs of baggage screening equipment being developed through research and development efforts.
In developing the comprehensive plan for installing in-line EDS baggage screening systems, as directed by the fiscal year 2005 DHS Appropriation Act Conference Report, and in satisfying the requirements set forth in the Intelligence Reform and Terrorism Prevention Act of 2004, the Secretary of the Department of Homeland Security should direct the Administrator of TSA to systematically assess the costs and benefits of deploying in-line baggage screening systems at airports that do not yet have in-line systems installed. As part of this assessment, the Administrator should estimate total funds needed to install in-line systems where appropriate, including the federal funds needed given different assumptions regarding the federal government and airport cost-shares for funding the in-line systems.

In developing the comprehensive plan for installing in-line EDS baggage screening systems, as directed by the fiscal year 2005 DHS Appropriation Act Conference Report, and in satisfying the requirements set forth in the Intelligence Reform and Terrorism Prevention Act of 2004, the Secretary of the Department of Homeland Security should direct the TSA Administrator to systematically assess the costs and benefits of deploying in-line baggage screening systems at airports that do not yet have in-line systems installed. As part of this assessment, the Administrator should work collaboratively with airport operators, who are expected to share the costs and benefits of in-line systems, to collect data and prepare the analyses needed to develop plans for installing in-line systems.

Systematic Planning Needed to Optimize the Deployment of Checked Baggage Screening Systems

The Secretary of the Department of Homeland Security should direct the TSA Administrator to assess the feasibility, expected benefits, and costs of replacing explosives trace detection (ETD) machines with stand-alone EDS machines for primary screening at those airports where in-line systems would not be either economically justified or justified for other reasons.
In conducting this assessment, the Administrator should consider the projected availability and costs for screening equipment being developed through research and development efforts.

Issue Date: March 28, 2005

To help manage risks associated with Secure Flight’s continued development and implementation, and to assist TSA in developing a framework from which to support its efforts in addressing congressional areas of interest outlined in Public Law 108-334, the Secretary of the Department of Homeland Security should direct the Assistant Secretary of TSA to finalize the system requirements document and the concept of operations, and develop detailed test plans to help ensure that all Secure Flight system functionality is properly tested and evaluated. These system documents should address all system functionality and include system stress test requirements.

To help manage risks associated with Secure Flight’s continued development and implementation, and to assist TSA in developing a framework from which to support its efforts in addressing congressional areas of interest outlined in Public Law 108-334, the Secretary of the Department of Homeland Security should direct the Assistant Secretary of TSA to develop a plan for establishing connectivity among the air carriers, U.S. Customs and Border Protection, and TSA to help ensure the secure, effective, and timely transmission of data for use in Secure Flight operations.
To help manage risks associated with Secure Flight’s continued development and implementation, and to assist TSA in developing a framework from which to support its efforts in addressing congressional areas of interest outlined in Public Law 108-334, the Secretary of the Department of Homeland Security should direct the Assistant Secretary of TSA to develop reliable life-cycle cost estimates and expenditure plans for Secure Flight, in accordance with guidance issued by the Office of Management and Budget, to provide program managers and oversight officials with information needed to make informed decisions regarding program development and resource allocations.

To help manage risks associated with Secure Flight’s continued development and implementation, and to assist TSA in developing a framework from which to support its efforts in addressing congressional areas of interest outlined in Public Law 108-334, the Secretary of the Department of Homeland Security should direct the Assistant Secretary of TSA to develop results-oriented performance goals and measures to evaluate the effectiveness of Secure Flight in achieving intended results in an operational environment, as outlined in the Government Performance and Results Act, including measures to assess associated impacts on aviation security.
To help manage risks associated with Secure Flight’s continued development and implementation, and to assist TSA in developing a framework from which to support its efforts in addressing congressional areas of interest outlined in Public Law 108-334, the Secretary of the Department of Homeland Security should direct the Assistant Secretary of TSA to, prior to achieving initial operational capability, finalize policies and issue associated documentation specifying how the Secure Flight program will protect personal privacy, including addressing how the program will comply with the requirements of the Privacy Act of 1974 and related legislation.

To help manage risks associated with Secure Flight’s continued development and implementation, and to assist TSA in developing a framework from which to support its efforts in addressing congressional areas of interest outlined in Public Law 108-334, the Secretary of the Department of Homeland Security should direct the Assistant Secretary of TSA to, prior to achieving initial operational capability, finalize policies and procedures detailing the Secure Flight passenger redress process, including defining the appeal rights of passengers and their ability to access and correct personal data.

To help improve TSA’s management of EDS and ETD maintenance costs and strengthen oversight of contract performance, the Secretary of Homeland Security should instruct the Assistant Secretary of TSA to establish a timeline to complete its evaluation and close out the Boeing contract and report to congressional appropriations committees on its actions, including any necessary analysis, to address the Department of Homeland Security Office of Inspector General’s recommendation to recover any excessive fees awarded to Boeing Service Company.
To help improve TSA’s management of EDS and ETD maintenance costs and strengthen oversight of contract performance, the Secretary of Homeland Security should instruct the Assistant Secretary of TSA to establish a timeline for completing life-cycle cost models for EDS, which TSA recently began.

To help improve TSA’s management of EDS and ETD maintenance costs and strengthen oversight of contract performance, the Secretary of Homeland Security should instruct the Assistant Secretary of TSA to revise policies and procedures to require documentation of the monitoring of EDS and ETD maintenance contracts to provide reasonable assurance that contractor maintenance cost data and performance data are recorded and reported in accordance with TSA contractual requirements and that self-reported contractor mean downtime data are valid, reliable, and justify the full payment of the contract amount.

To assist TSA in further strengthening the development and implementation of the Secure Flight program, the Secretary of Homeland Security should direct the Assistant Secretary of TSA to fully incorporate best practices into the development of Secure Flight life-cycle cost and schedule estimates, to include: (1) updating life-cycle cost and schedule estimates; (2) demonstrating that the Secure Flight schedule has the logic in place to identify the critical path, integrates lower-level activities in a logical manner, and identifies the level of confidence in meeting the desired end date; and (3) developing and implementing a plan for managing and mitigating cost and schedule risks, including performing a schedule risk analysis and a cost and schedule risk assessment.
To assist TSA in further strengthening the development and implementation of the Secure Flight program, the Secretary of Homeland Security should direct the Assistant Secretary of TSA to fully implement the provisions in the program’s risk management plan, to include developing an inventory of risks with prioritization and mitigation strategies, reporting the status of risks and progress to management, and maintaining documentation of these efforts.

To assist TSA in further strengthening the development and implementation of the Secure Flight program, the Secretary of Homeland Security should direct the Assistant Secretary of TSA to finalize and approve Secure Flight’s end-to-end testing strategy, and incorporate end-to-end testing requirements in other relevant test plans, to include the test and evaluation master plan. The strategy and plans should contain provisions for: (1) testing that ensures that the interrelated systems that collectively support Secure Flight will interoperate as intended in an operational environment; and (2) defining and setting dates for key milestone activities and identifying who is responsible for completing each of those milestones and when.

To mitigate future risks of performance shortfalls and strengthen management of the Secure Flight program moving forward, the Secretary of Homeland Security should direct the Assistant Secretary of TSA to periodically assess the performance of the Secure Flight system’s matching capabilities and results to determine whether the system is accurately matching watch-listed individuals while minimizing the number of false positives, consistent with the goals of the program; document how this assessment will be conducted and how its results will be measured; and use these results to determine whether the system settings should be modified.
Issue Date: October 7, 2009

To help ensure that DHS’s Science and Technology Directorate (S&T) and TSA take a comprehensive, risk-informed approach to the research, development, test and evaluation (RDT&E), procurement, and deployment of airport passenger checkpoint screening technologies, and to increase the likelihood of successful procurements and deployments of such technologies, in the restricted version of this report, we recommended that the Assistant Secretary for TSA should conduct a complete risk assessment, including threat, vulnerability, and consequence assessments, which would apply to the Passenger Screening Program (PSP).

To help ensure that DHS’s S&T and TSA take a comprehensive, risk-informed approach to the RDT&E, procurement, and deployment of airport passenger checkpoint screening technologies, and to increase the likelihood of successful procurements and deployments of such technologies, in the restricted version of this report, we recommended that the Assistant Secretary for TSA should develop cost-benefit analyses to assist in prioritizing investments in new checkpoint screening technologies.

To help ensure that DHS’s S&T and TSA take a comprehensive, risk-informed approach to the RDT&E, procurement, and deployment of airport passenger checkpoint screening technologies, and to increase the likelihood of successful procurements and deployments of such technologies, in the restricted version of this report, we recommended that the Assistant Secretary for TSA should develop quantifiable performance measures to assess the extent to which investments in research, development, and deployment of checkpoint screening technologies achieve performance goals for enhancing security at airport passenger checkpoints.
To help ensure that DHS’s S&T and TSA take a comprehensive, risk-informed approach to the RDT&E, procurement, and deployment of airport passenger checkpoint screening technologies, and to increase the likelihood of successful procurements and deployments of such technologies, in the restricted version of this report, we recommended that the Assistant Secretary for TSA should, after conducting a complete risk assessment and completing cost-benefit analyses and quantifiable performance measures for the PSP, incorporate the results of these efforts into the PSP strategy as determined appropriate.

To help ensure that DHS’s S&T and TSA take a comprehensive, risk-informed approach to the RDT&E, procurement, and deployment of airport passenger checkpoint screening technologies, and to increase the likelihood of successful procurements and deployments of such technologies, in the restricted version of this report, we recommended that the Assistant Secretary for TSA should, to the extent feasible, ensure that operational tests and evaluations have been successfully completed before deploying checkpoint screening technologies to airport checkpoints.

To help ensure that DHS’s S&T and TSA take a comprehensive, risk-informed approach to the RDT&E, procurement, and deployment of airport passenger checkpoint screening technologies, and to increase the likelihood of successful procurements and deployments of such technologies, in the restricted version of this report, we recommended that the Assistant Secretary for TSA should evaluate whether TSA’s current passenger screening procedures should be revised to require the use of appropriate screening procedures until it is determined that existing emerging technologies meet their functional requirements in an operational environment.
To help ensure that DHS’s S&T and TSA take a comprehensive, risk-informed approach to the RDT&E, procurement, and deployment of airport passenger checkpoint screening technologies, and to increase the likelihood of successful procurements and deployments of such technologies, in the restricted version of this report, we recommended that the Assistant Secretary for TSA should, in the future, prior to testing or using all checkpoint screening technologies at airports, determine whether TSA’s passenger screening procedures should be revised to require the use of appropriate screening procedures until the performance of the technologies has been validated through successful testing and evaluation.

To help ensure that DHS’s S&T and TSA take a comprehensive, risk-informed approach to the RDT&E, procurement, and deployment of airport passenger checkpoint screening technologies, and to increase the likelihood of successful procurements and deployments of such technologies, in the restricted version of this report, we recommended that the Assistant Secretary for TSA should evaluate the benefits of the Explosives Trace Portals that are being used in airports, and compare the benefits to the costs to operate and maintain this technology to determine whether it is cost-effective to continue to use the machines in airports.

To help ensure that TSA takes a comprehensive and cost-effective approach to the procurement and deployment of EDSs that meet the 2010 EDS requirements and any subsequent revisions, the Assistant Secretary for TSA should develop a plan to ensure that screening devices or protocols are in place to resolve EDS alarms if EDSs are deployed that detect a broader set of explosives than existing ETD machines used to resolve EDS screening alarms.
To help ensure that TSA takes a comprehensive and cost-effective approach to the procurement and deployment of EDSs that meet the 2010 EDS requirements and any subsequent revisions, the Assistant Secretary for TSA should develop a plan to ensure that TSA has the explosives data needed for each of the planned phases of the 2010 EDS requirements before starting the procurement process for new EDSs or upgrades included in each applicable phase.

To help ensure that TSA takes a comprehensive and cost-effective approach to the procurement and deployment of EDSs that meet the 2010 EDS requirements and any subsequent revisions, the Assistant Secretary for TSA should establish a process to communicate information to EDS vendors in a timely manner regarding TSA’s EDS acquisition, including information such as changes to the schedule.

To help ensure that TSA takes a comprehensive and cost-effective approach to the procurement and deployment of EDSs that meet the 2010 EDS requirements and any subsequent revisions, the Assistant Secretary for TSA should develop and maintain an integrated master schedule for the entire Electronic Baggage Screening Program (EBSP) in accordance with the nine best practices identified by GAO for preparing a schedule.

To help ensure that TSA takes a comprehensive and cost-effective approach to the procurement and deployment of EDSs that meet the 2010 EDS requirements and any subsequent revisions, the Assistant Secretary for TSA should ensure that key elements of the program’s final cost estimate reflect critical issues, such as the potential cost impacts resulting from schedule slippage identified once an integrated master schedule for the Electronic Baggage Screening Program has been developed in accordance with the nine best practices identified by GAO for preparing a schedule.
To help ensure that TSA takes a comprehensive and cost-effective approach to the procurement and deployment of EDSs that meet the 2010 EDS requirements and any subsequent revisions, the Assistant Secretary for TSA should develop a plan to deploy EDSs that meet the most recent EDS explosives-detection requirements and ensure that new machines, as well as machines deployed in airports, will be operated at the levels established in those requirements. This plan should include the estimated costs for new machines and upgrading deployed machines, and the time frames for procuring and deploying new machines and upgrading deployed machines.

In order to strengthen the credibility, comprehensiveness, and reliability of TSA’s cost estimates and related savings estimates for the EBSP, the Administrator of TSA should ensure that its life-cycle cost estimates conform to cost estimating best practices.

To help ensure TSA analyzes canine team data to identify program trends, and determines if passenger screening canine (PSC) teams provide an added security benefit to the civil aviation system, and if so, deploys PSC teams to the highest-risk airports, we recommend that the Administrator of the Transportation Security Administration direct the Manager of the National Canine Program (NCP) to regularly analyze available data to identify program trends and areas that are working well and those in need of corrective action to guide program resources and activities. These analyses could include, but not be limited to, analyzing and documenting trends in proficiency training, canine utilization, results of short notice assessments (covert tests) and final canine responses, performance differences between law enforcement officer (LEO) and TSI canine teams, as well as an assessment of the optimum location and number of canine teams that should be deployed to secure the U.S. transportation system.
To help ensure TSA analyzes canine team data to identify program trends, and determines if passenger screening canine (PSC) teams provide an added security benefit to the civil aviation system, and if so, deploys PSC teams to the highest-risk airports, we recommend that the Administrator of TSA direct the Manager of the NCP to expand and complete testing, in conjunction with DHS S&T, to assess the effectiveness of PSCs and conventional canines in all airport areas deemed appropriate (i.e., in the sterile area, at the passenger checkpoint, and on the public side of the airport) prior to making additional PSC deployments to help (1) determine whether PSCs are effective at screening passengers, and resource expenditures for PSC training are warranted, and (2) inform decisions regarding the type of canine team to deploy and where to optimally deploy such teams within airports.

Issue Date: March 31, 2014

To help ensure that TSA improves Screening Officers’ (SO) performance on Advanced Imaging Technology systems equipped with Automated Target Recognition (AIT-ATR) and uses resources effectively, the Administrator of the Transportation Security Administration should establish protocols that facilitate the capturing of operational data on secondary screening of passengers at the checkpoint to determine the extent to which AIT-ATR system false alarm rates affect operational costs once AIT-ATR systems are networked together.

To help ensure that TSA invests in screening technology that meets mission needs, the Administrator of TSA should measure system effectiveness based on the performance of the Advanced Imaging Technology 2 (AIT-2) technology and the screening officers who operate the technology, while taking into account current processes and deployment strategies, before procuring AIT-2 systems.
To help ensure that TSA invests in screening technology that meets mission needs, the Administrator of TSA should use scientific evidence and information from DHS’s Science and Technology Directorate and the national laboratories, as well as information and data provided by vendors, to develop a realistic schedule with achievable milestones that outlines the technological advancements, estimated time, and resources needed to achieve TSA’s Tier IV end state before procuring AIT-2 systems.

To increase the likelihood of timely and successful acquisitions when enhancing advanced imaging technology (AIT) capabilities for airport passenger checkpoint screening, the Secretary of Homeland Security, in conjunction with the Administrator of TSA, should conduct a technical risk assessment to determine the extent to which AIT products need additional development to meet requirements. TSA should complete this assessment prior to award of production units and should seek an independent review from a knowledgeable party, such as the DHS Science and Technology Directorate.

To increase the likelihood of timely and successful acquisitions when enhancing AIT capabilities for airport passenger checkpoint screening, the Secretary of Homeland Security, in conjunction with the Administrator of TSA, should ensure that information from technical risk assessments is used to inform all future iterations of TSA’s roadmap for enhancing AIT capabilities.

In addition to the contact named above, Glenn G. Davis (Assistant Director), Nima Patel Edwards (Analyst-in-Charge), Rodney Bacigalupo, Eric D. Hauswirth, Richard B. Hung, Thomas Lombardi, Luis E. Rodriguez, Tovah Rom, Carley Shinault, and Edith Sohna made key contributions to this report.

Within DHS, TSA is the federal agency responsible for securing domestic transportation systems.
The Transportation Security Acquisition Reform Act (TSARA) contains a provision for GAO to submit a report to Congress containing an assessment of TSA's implementation of GAO recommendations regarding the acquisition of security-related technology. This report addresses (1) the status of TSA's implementation of relevant GAO recommendations since 2003 and the characteristics of those recommendations and (2) benefits realized by TSA in implementing those recommendations. GAO determined the number and status of recommendations made to TSA from October 1, 2003, after TSA had become a part of the newly created DHS, through July 31, 2015, as well as the benefits derived from the recommendations TSA implemented, using an internal database that GAO maintains on the status of recommendations it makes. GAO specifically identified recommendations related to the acquisition of technology that helps TSA prevent or defend against threats to domestic transportation systems. TSA concurred with GAO's list of recommendations. TSA also provided technical comments on a draft of this report, which GAO incorporated as appropriate. DHS did not provide formal comments.

The Department of Homeland Security's (DHS) Transportation Security Administration (TSA) has implemented 51 of the 58 recommendations GAO made from October 1, 2003, through July 31, 2015, to improve TSA's acquisition of security-related technology. GAO's recommendations generally directed TSA to develop a plan; conduct an analysis; or implement a program, policy, or procedure. For example, in March 2014, GAO recommended that TSA establish protocols to capture operational data on secondary screening of passengers at the checkpoint, to help improve screening officers' performance. TSA has not implemented 7 of the 58 recommendations. GAO closed 4 of the 7 recommendations because TSA stated that it would not take action or GAO determined that it was unlikely that TSA would take action.
These recommendations related to establishing the effectiveness of canine screening and to conducting technical assessments to strengthen airport perimeter security and Advanced Imaging Technology. The remaining 3 open recommendations are focused on improving Advanced Imaging Technology operations. GAO continues to believe these recommendations are valid and should be fully addressed. Since fiscal year 2003, GAO has identified approximately $1.7 billion in financial benefits, largely representing funds that TSA used to support other programs and activities, based on implementation of GAO's recommendations as well as findings from related GAO reports and testimonies on security-related technology acquisitions. GAO has also documented additional benefits, including programmatic and process improvements to TSA's programs, such as improvements to TSA's Electronic Baggage Screening Program's cost estimating processes. |
H.R. 4401 would establish a Health Care Infrastructure Commission within the Department of Health and Human Services (HHS) to design, construct, and implement an immediate claim, administration, payment resolution, and data collection system that would initially be used by the Medicare part B program. This system would (1) immediately advise each provider and supplier of coverage determination; (2) immediately notify each provider and supplier of any incomplete or invalid claims, including the identification of missing data and coding errors; (3) immediately process clean claims so that a provider or supplier may provide a written explanation of medical benefits, including costs and coverage to any beneficiary at the point of care; and (4) allow electronic payment of claims for which payment is not made on a periodic payment basis. The bill also calls for the commission to conduct and publicize a study, with final recommendations, on the design and construction of such a system within 3 years and establishes a timetable with specific performance measures for its initial, intermediate, and full implementation. Another key provision of H.R. 4401 that relates to the Medicare program is the elimination of section 1842(c)(3) of the Social Security Act (42 U.S.C. 1395u(c)(3)), which prohibits the payment of claims until after 13 calendar days from the date received if electronically submitted or until after 26 calendar days if manually submitted. In addition, H.R. 4401 would affect FEHBP—the federal government’s health benefits program for employees and retirees—which is run by the Office of Personnel Management (OPM). It would require OPM to adapt the immediate claim, administration, payment resolution, and data collection system for use by FEHBP and require FEHBP carriers to use that system. H.R. 4401 also sets a timetable with specific performance measures for initial, intermediate, and full implementation of the system. Although H.R. 
4401 is explicit in that the proposed system would cover the Medicare part B program and FEHBP, it is unclear whether other federal health programs would also be included in this system. H.R. 4401 calls for the establishment of an advanced informational infrastructure for “federal health benefits programs which consists of an immediate claim, administration, payment resolution, and data collection system . . . that is initially for use by carriers to process claims submitted by providers and suppliers under part B of the Medicare program . . . .” (In a later section, the bill requires that this system be applied to FEHBP.) The bill does not define “federal health benefits programs,” and provides for inclusion of only Medicare part B and FEHBP in the system. However, if in the future the proposed system is intended to include other federal health benefits programs such as Medicare part A, Medicaid, veterans’ health services, the Department of Defense’s health services, and Indian health services, development and implementation of the system envisioned by the bill would be different and much more challenging. These other federal health programs are markedly different. In some cases, the federal government acts like other large employers that contract with insurance companies and health plans to offer health benefits to employees and their dependents. In other cases, it acts like a large insurance company that pays directly for health care services. In still other instances, it acts like a large staff-model health maintenance organization that operates a network of hospitals and employs health care professionals. Accordingly, if the proposed real-time claims processing system were to later be intended to address the claims processing requirements of any of these programs, it would have a significant impact on the system’s design and complexity. 
Administered by HHS’ Health Care Financing Administration (HCFA), Medicare is the nation’s largest health insurer, covering almost 40 million beneficiaries at a cost of over $200 billion annually. Medicare operates through a complicated administrative structure. Its authorizing legislation—title XVIII of the Social Security Act—required HCFA to contract with the private sector for claims processing and payment functions. This requirement has led to a large contractor network composed of insurance companies responsible for processing Medicare claims in given states. These Medicare contractors are responsible for claims processing and administration, including (1) receiving claims; (2) judging their appropriateness; (3) paying appropriate ones promptly; (4) identifying potentially fraudulent claims or providers, and withholding payment, if necessary; and (5) recovering overpayments or inappropriate payments. Contractors develop a set of criteria to determine which claims to pay, guided by laws, regulations, the Medicare policy manuals, and periodic agency directives. For the Medicare part B program, HCFA uses 22 companies doing business as carriers to process claims. Each carrier relies on one of four standard systems to process its claims, adding its own front-end and back-end processing systems. These systems interface with the common working file (CWF)—a set of nine databases containing beneficiary information for specific geographic regions—to authorize claims payments and determine beneficiary eligibility. The CWF obtains information, such as beneficiary enrollment data, from HCFA’s internal systems. Contractors pay approved claims by check or by electronic funds transfers. Each day, contractors’ banks draw money from the Federal Reserve System sufficient to cover the provider checks and electronic funds transfers expected to clear the bank during the next business day. 
Figure 1 provides an overview of the Medicare fee-for-service claims process for the part B program. In fiscal year 1999, about 81 percent of part B claims that were completed were submitted electronically by providers or billing services, which use one of two standard electronic formats. As illustrated in figure 2, once claims are submitted, carriers and HCFA use a variety of automated edits to determine the validity of these claims. Carriers generally use three types of edits before authorizing the payment of a claim. First, front-end edits are used to ensure that valid values are used and appropriate fields are completed. Claims that fail the front-end edits are rejected and returned to the provider. Second, carriers use utilization/medical policy edits to check claims against the medical-necessity criteria in medical policies. Utilization/medical policy edits are particularly important because Medicare pays providers a fee for covered medical services, which are identified through a complex, three-level coding system, the HCFA Common Procedure Coding System. Using these codes, utilization/medical policy edits flag indicators such as whether the medical diagnosis was appropriate for the patient’s gender or age or whether the medical procedure exceeded the threshold allowed during a given year. These edits can result in (1) a claim passing to the next set of edits, (2) a claim denial, (3) a claim being suspended until a manual review by claims examiners (who may request additional documentation) is conducted, or (4) a claim adjustment. The third type of carrier edit checks for other payers, which are other primary sources of payment, such as employer-sponsored insurance or third-party liability settlements. If another potential payer is identified, the claim is generally denied. Once a claim passes the carrier edits, the claim is checked against one of the nine CWFs that are processed at seven different computer sites around the country. 
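The carrier-side sequence just described (front-end edits, then utilization/medical policy edits, then other-payer edits) can be sketched in miniature. The field names, codes, and rules below are invented for illustration; actual carriers apply far more extensive edit sets:

```python
# Minimal sketch of the three carrier edit stages; field names, codes, and
# rules are hypothetical illustrations, not actual Medicare edits.

def front_end_edits(claim):
    """Reject claims with missing fields or invalid values."""
    required = ("provider_id", "beneficiary_id", "procedure_code", "diagnosis_code")
    if any(not claim.get(field) for field in required):
        return "REJECTED"        # returned to the provider
    return "PASS"

def policy_edits(claim):
    """Flag medical-necessity problems (e.g., procedure inconsistent with sex)."""
    if claim["procedure_code"] == "58150" and claim.get("sex") == "M":
        return "DENIED"          # e.g., hysterectomy billed for a male patient
    if claim.get("units", 1) > 10:
        return "SUSPENDED"       # held for manual review by a claims examiner
    return "PASS"

def other_payer_edits(claim):
    """Deny if another primary payer (e.g., employer insurance) is on file."""
    return "DENIED" if claim.get("other_payer") else "PASS"

def carrier_adjudicate(claim):
    for stage in (front_end_edits, policy_edits, other_payer_edits):
        status = stage(claim)
        if status != "PASS":
            return status
    return "TO_CWF"              # forward to the common working file check

clean = {"provider_id": "P1", "beneficiary_id": "B1",
         "procedure_code": "99213", "diagnosis_code": "J06.9", "sex": "F"}
print(carrier_adjudicate(clean))  # TO_CWF
```

A claim that clears all three stages would then proceed to the common working file check.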
The CWF edits check for items such as beneficiary eligibility, deductibles and limits, and duplicate claims. These edits can result in (1) an authorized claim, (2) a claim returned to the carrier for further review, or (3) a claim adjustment. The CWF also checks for other payers and, if found, the claim is returned to the carrier for further review. One outcome of developing an immediate claim, administration, payment resolution, and data collection system would be faster Medicare part B claims payments. However, most Medicare claims could be paid more quickly using current processes by simply eliminating the mandatory delay in paying claims. Specifically, by enacting the section of the bill that eliminates the mandatory claims payments delay until after 13 calendar days from the date of electronic submission (26 calendar days if submitted manually), the mean time to pay claims would likely be substantially reduced. The mean time for processing and paying a clean part B claim that required minimal or no manual intervention was 17.3 days in fiscal year 1999 (14.5 days for electronic submissions). However, HCFA estimates that carriers process almost two-thirds of all claims within 5 days. Once processed and authorized for payment, the claims are held until the next payment cycle after the 13- or 26-day requirement has been met (carriers generally make payments every work day). The carrier then issues a check or authorizes an electronic funds transfer to pay the claim. One drawback to eliminating the mandatory payment delay is that the Supplementary Medical Insurance trust fund, from which the Medicare part B program is funded, would lose some of the interest it earns on its balance if payments were made more quickly. Under HCFA’s current claims processing environment, we estimate that the trust fund could lose as much as about $140 million in interest revenue annually if the mandatory payment delay were removed. 
This amount assumes (1) annual part B outlays of $60 billion, (2) that the average time to pay claims would drop from 17.3 days to 5 days, and (3) an average interest rate of about 7 percent on securities. The amount the trust fund could lose may be even higher if a real-time claims processing system were implemented because the average time to pay a claim could drop below 5 days. The Medicare Supplementary Medical Insurance trust fund is financed by payments from federal government general revenues and by monthly premiums charged beneficiaries. Consequently, a decrease in interest earnings could prompt the need for additional appropriations or increases in beneficiaries’ premiums to compensate for the interest that the trust fund would otherwise have earned. While the development of an immediate claim, administration, payment resolution, and data collection system to be used by the Medicare part B program might be feasible, it would significantly change the government’s current processes because it would require the real-time processing of certain elements of the claims process that are currently performed in batch mode or manually. In the abstract, a real-time Medicare part B claims process could be achievable if appropriate systems development policies and techniques are used. Although more beneficiaries might have to pay their copayments immediately, such a system could provide health care providers and beneficiaries with several benefits—primarily the immediate notification of approved or denied claims. However, without appropriate safeguards, a real-time claims processing system could involve serious risks because it opens the process to a possible rise in the number of improper Medicare payments. In addition, the technical and cost risks associated with developing a real-time claims processing system could be considerable. 
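The trust fund interest figure cited above follows directly from these three assumptions, as a quick day-count calculation shows:

```python
# Reproducing the testimony's trust fund estimate from its stated assumptions:
# $60 billion in annual part B outlays, mean payment time falling from 17.3
# days to 5 days, and an average interest rate of about 7 percent.

annual_outlays = 60e9        # dollars paid out over the year
days_earlier = 17.3 - 5.0    # each dollar leaves the fund about 12.3 days sooner
interest_rate = 0.07         # annual yield on trust fund securities

lost_interest = annual_outlays * interest_rate * days_earlier / 365
print(f"${lost_interest / 1e6:.0f} million lost annually")  # about $140 million
```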
We have long identified Medicare as a high-risk program that is vulnerable to fraud, abuse, and payment errors. Many of Medicare’s vulnerabilities stem from its size and decentralized administrative structure, which make it a perpetually attractive target for exploitation and make payment errors more likely. Because wrongdoers are continually finding new ways to dodge program safeguards, HCFA and its contractors periodically revise their pre-payment edit and post-payment audit routines. As a result, the proposed real-time claims processing system must include appropriate internal controls to help ensure that operational problems are minimized and program integrity protected. Key to the design of appropriate controls is the effective assessment of both external and internal risks that an agency faces in achieving its objectives, as well as determining how risks should be minimized. A major internal control challenge that a real-time claims processing system would have to overcome is ensuring that prepayment processes currently performed manually are adequately addressed. Any new real-time claims process applied to all claims would have to find a way to accommodate existing manual processes (e.g., postpone until after claims payment or provide tentative claims approval in certain circumstances), such as in the case of claims examiners’ reviews of claims that are suspended because they did not pass utilization/medical policy edits or in cases that involved claims in which Medicare should be the secondary, rather than primary, payer. This latter issue is particularly problematic because determining another insurer’s liability can be a time-consuming process of discovering whether insurance coverage overlaps and, if so, ascertaining Medicare’s liability. If issues such as these are not adequately addressed, additional improper Medicare payments can result. 
It is also essential that current program safeguards, such as the edit process illustrated in figure 2, not be compromised. The utilization/medical policy edits that address the often complex art of coding claims are a particular area of concern. As previously mentioned, HCFA’s Common Procedure Coding System uses three levels of codes: Level 1, the American Medical Association’s Physicians’ Current Procedural Terminology, consists of a list of 5-digit codes for most of the services performed by physicians. These codes are used to bill for most procedures and services but have limited selections for describing supplies, materials, and injections. Level 2 codes are national codes that supplement the level 1 codes and are used to bill for a range of services and supplies such as vision services and surgical supplies. These codes have a uniform description nationwide, but due to what is known as “carrier discretion,” their processing and reimbursement are not necessarily uniform. Level 3 codes are local codes developed by individual Medicare carriers. These codes are often used to describe new services, supplies, and materials, as well as to report procedures and services that have been deleted from Current Procedural Terminology codes but are still recognized and reimbursed by the carrier. The Medicare coding system is difficult to use because it (1) attempts to identify codes for all accepted medical procedures, including codes to describe minor procedures that are components of more comprehensive procedures, and (2) changes every year. For example, the fee for surgery often includes the cost of related services for the global service period, that is, for a set number of days before and after the surgery. To prevent overpayment in these cases, Medicare carriers need to identify when claims for surgery include codes that represent related services and reduce the payment accordingly. These complexities can inadvertently lead providers to submit improperly coded claims. 
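A utilization/medical policy edit for the global surgery situation above is essentially a table of code relationships. A toy sketch follows; the codes, fees, and pairings are invented for illustration and are not actual HCFA codes:

```python
# Toy bundling edit: when a claim bills a component service together with the
# comprehensive procedure that already includes it (as in the global surgery
# example), pay only the comprehensive fee. Codes, fees, and pairings below
# are invented; real correct-coding edits cover thousands of code pairs.

COMPONENT_OF = {"C100": "S900"}            # component code -> comprehensive code
FEE = {"S900": 850.00, "C100": 120.00}     # hypothetical fee schedule

def allowed_payment(billed_codes):
    payable = set(billed_codes)
    for component, comprehensive in COMPONENT_OF.items():
        if component in payable and comprehensive in payable:
            payable.discard(component)     # component is bundled; do not pay twice
    return sum(FEE[code] for code in payable)

print(allowed_payment(["S900", "C100"]))   # 850.0 -- not 970.0
print(allowed_payment(["C100"]))           # 120.0 when billed alone
```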
These coding complexities also make the Medicare program vulnerable to abuse from providers or billing services that attempt to maximize reimbursement by intentionally submitting claims containing inappropriate combinations of codes. Because a real-time claims processing system can be particularly vulnerable to code manipulation (e.g., through repeated submission of fraudulent claims until they pass the system’s edits), it would be prudent to exclude problem providers from participating in a real-time system and require that new providers complete a probationary period before they become eligible to participate. In another situation—agency “fast pay” initiatives (when payment authorization is made prior to verifying receipt and acceptance of goods or services)—we have similarly stated that agencies should limit its use to those cases in which suppliers have had and continue to have good ongoing business relationships with the agency. While the system proposed by H.R. 4401 is not a “fast pay” situation, it would be prudent to employ these same controls since Medicare has areas in which mispayment and fraud have been particular problems. For example, medical equipment supply is an area vulnerable to fraud, as indicated by its high payment error rate. Indeed, according to fiscal year 1997 and 1998 Department of Justice reports, a few medical equipment suppliers were able to enroll in the Medicare program and obtain millions of dollars in fraudulent payments before post-payment reviews and utilization analyses were able to identify the fraudulent activity. Further, ensuring that adequate documentation controls (e.g., detailed history files and/or logs) are in place and enforced to ensure that the electronic trail is not lost or tampered with would be particularly important in a Medicare real-time processing environment. 
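The provider-level controls suggested above (excluding problem providers, a probationary period for new providers, and watching for repeated resubmission of denied claims) might be outlined as an eligibility gate. The thresholds, status labels, and field names are illustrative assumptions:

```python
from datetime import date, timedelta

# Sketch of a real-time channel eligibility gate: exclude problem providers,
# hold new enrollees to a probationary period, and flag providers who
# repeatedly resubmit denied claims (a pattern consistent with probing the
# edits). Thresholds, labels, and field names are illustrative assumptions.

PROBATION = timedelta(days=365)
RESUBMIT_LIMIT = 3

def realtime_eligible(provider, today=date(2000, 6, 1)):
    if provider["status"] == "EXCLUDED":
        return False                 # problem provider: barred entirely
    if today - provider["enrolled"] < PROBATION:
        return False                 # new provider: route to batch processing
    if provider["denied_resubmissions_30d"] > RESUBMIT_LIMIT:
        return False                 # refer for program integrity review
    return True

veteran = {"status": "ACTIVE", "enrolled": date(1995, 1, 1),
           "denied_resubmissions_30d": 0}
newcomer = {"status": "ACTIVE", "enrolled": date(2000, 3, 1),
            "denied_resubmissions_30d": 0}
print(realtime_eligible(veteran), realtime_eligible(newcomer))  # True False
```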
The importance of maintaining detailed Medicare payment histories and medical records is demonstrated by the results of HHS’ Office of the Inspector General’s fiscal year 1999 claims review. The Office of the Inspector General found that claim payment histories and provider medical records were essential to identifying the payment errors it found. In addition to the Medicare part B improper payment implications of H.R. 4401, other considerations to be taken into account are the technical and cost risks associated with the development and implementation of a real-time claims processing system. The Clinger-Cohen Act requires agency heads to design and implement a process for maximizing the value and assessing and minimizing the risks of information technology acquisitions. Guidance prepared by the Office of Management and Budget and by us on how to implement such a process calls on agencies to assess projects’ benefits, costs, and risks. Items to consider before undertaking an information technology project include the project’s return on investment, its link to the business’ objectives or strategic plan, and evidence of compliance with the organization’s overall systems architecture. Without such analyses, it is risky to require that this system be implemented. Response times, which can be slowed by the amount and type of telecommunications involved and the complexity of processing, are a critical factor in the success of real-time systems. An example of a systems development that failed, in part due to a response time problem, is the Bureau of Land Management’s Automated Land and Mineral Record System Initial Operating Capability. 
As we testified in March 1999, during an operational assessment test and evaluation, users reported that system response time problems were severe or catastrophic at all test sites. Because of this and other problems and after obligating over $67 million, the Bureau of Land Management decided that the Initial Operating Capability was not deployable. While a high-quality system design would reduce the risk of slow response times, hundreds of thousands of providers could be submitting millions of transactions daily (carriers completed action on almost 718 million Medicare part B claims in fiscal year 1999). Moreover, it is critical that system controls (such as the many and varied edits previously discussed) not be compromised in an effort to achieve reasonable response times. Security, already a major concern in the Medicare program, must also be adequately addressed in any proposed real-time claims processing system. H.R. 4401 requires that the real-time claims processing system include strict security measures that guard system integrity, including protecting the privacy of patients and the confidentiality of personally identifiable health insurance data. Implementing such requirements, however, is not easy. Both HHS’ Office of the Inspector General and we have reported that HCFA’s computer controls do not effectively prevent unauthorized access to, and disclosure of, sensitive Medicare information. This problem could be compounded if appropriate security controls are not designed into the proposed system. In particular, without appropriate controls, electronic connections can provide a path that can be used by hackers and others to gain access to databases that contain sensitive information or to simply disrupt operations. 
Recent experiences with the Melissa and “ILOVEYOU” computer viruses demonstrate the formidable challenge the federal government faces in protecting its information technology assets and sensitive data. Although key government services remained largely operational, these viruses were disruptive and provided evidence that computer attack tools and techniques are becoming increasingly sophisticated. Moreover, if the design for the real-time claims processing system includes a World Wide Web-based system, the possibility of other types of attacks must also be considered and addressed. For example, a “denial-of-service” attack (e.g., a web site is flooded with fake requests for pages) can make it difficult or even impossible for legitimate customers to access a web site or cause the targeted system to crash. Computer attacks are also a cause for broader information security concerns across government because of the inability to detect, protect against, and recover from computer attacks; inadequately segregated duties, which increase the risk that people can take unauthorized actions without detection; and weak configuration management processes. Because Medicare part B and FEHBP are substantially different programs, it would be difficult to design and implement a single system to process claims under both programs, as called for by H.R. 4401. Specifically, H.R. 4401 requires that (1) OPM adapt the immediate claim, administration, payment resolution, and data collection system for use by the FEHBP and (2) carriers participating in FEHBP use the system to satisfy certain minimum requirements for claim submission, processing, and payment. Under FEHBP, the government contracts with private plans to finance or provide care to federal workers and retirees for negotiated annual premiums. The government runs no plans, pays no claims, and its financial obligations are limited to its share of the cost of the private plan premiums and certain administrative costs. 
For 2000, federal employees could select from seven nationwide fee-for-service plans, six fee-for-service plans open to specific groups, and hundreds of health maintenance organization plans available throughout the nation. As we explained in August 1998, Medicare and FEHBP are significantly different. For example, HCFA and its carriers authorize claims payments and monitor abuse or fraud, while these roles are delegated to the hundreds of health plans that are enrolled under FEHBP. In addition, traditional Medicare covers the same standard package of services and requires the same deductibles, coinsurance, and copayments for all beneficiaries. In contrast, FEHBP does not require participating plans to cover a standard or core benefits package. Although all plans offer inpatient hospital and outpatient medical coverage as well as certain OPM-required services, specific benefits vary. These differences would make it challenging and costly to design and implement a real-time claims processing system for both programs. Moreover, FEHBP carriers may balk at being forced to implement a system that was not developed with their particular systems and processes in mind, and it could cause them to drop out of the program. The implications of having a real-time claims processing system that would initially be used by Medicare part B carriers and be developed and implemented by the seven-member Health Care Infrastructure Commission instead of HCFA should be carefully considered. Specifically, the bill charges the commission, which does not include HCFA, with designing, constructing, and implementing a real-time claims processing system. Adding another organization to the already complicated Medicare process would compound the project’s complexity. Moreover, any system related to processing Medicare part B claims would greatly affect HCFA’s current systems as well as its future systems development. 
Further, the bill is silent on whether the commission would also be responsible for maintaining the system, which raises additional uncertainties about the commission’s and HCFA’s respective roles. The commission could elect to use HCFA for the development, implementation, and maintenance of the system. In such a case, if a real- time claims processing system is to be developed, it may be more fitting for the proposed commission to oversee HCFA’s actions, rather than develop and implement the system itself. Such oversight could include evaluating the system design and monitoring HCFA’s development and implementation actions. Aside from its role, the composition of the commission also needs to be carefully considered. In particular, having health care and financial management expertise on the commission would be critical. As currently conceived, though, the commission includes several officials from federal agencies with expertise in advanced information technology but not health care or financial management. Specifically, the bill explicitly calls for each official appointed to the commission to “be an expert in advanced information technology” but does not address health care or financial management expertise. If a real-time claims processing system is to be developed, as envisioned by the bill, consideration should be given to including key HCFA and carrier officials with health care claims processing, program integrity, and financial management expertise on the commission. One reason it is important for HCFA and its contractors to be part of the commission is that the development of a real-time claims processing system could overlap—and possibly conflict with—ongoing and planned HCFA initiatives, which could be costly and disruptive to both efforts. For example, HCFA plans to transition from four to two standard Medicare part B systems (one is only for durable medical equipment carriers) by fiscal year 2003. 
Initiatives such as this would clearly affect, and be affected by, a real-time claims processing system. Other entities that should be considered for membership in the commission if the real-time claims processing system set out in the bill is to be developed are OPM and providers. A representative from OPM should be considered as a member of the commission since, as currently called for in the bill, any system developed would be applied to the FEHBP. Moreover, it may be desirable to have a representative from the provider community on the commission, since a real-time claims processing system would also significantly affect providers. A past HCFA system development failure could provide valuable lessons in the type of approach that could be taken to determine whether a cost-effective, real-time claims processing system can be built. In the mid-1990s, HCFA attempted to improve the efficiency and effectiveness of its Medicare operations by developing one unified computer system—the Medicare Transaction System (MTS)—to replace its existing standard systems. This single system would have integrated data from Medicare part A and part B and managed care and provided a comprehensive view of billing practices. As we previously reported, the MTS project encountered problems from the very beginning. It was plagued with schedule delays, cost overruns, and the lack of effective management and oversight. Ultimately, in August 1997, HCFA terminated the MTS contract on which it had spent over 3 years and about $80 million. Although about $50 million of this amount was for software development (the other $30 million went to internal HCFA costs), this failed project did not produce integrated claims processing software. As we testified in September 1997, MTS provided HCFA with a huge learning experience about the difficulty of acquiring such a large system under a single contract and a better understanding of the requirements for developing a Medicare claims processing system. 
The learning experience HCFA gained from MTS can provide lessons for the proposed real-time claims processing system. In particular, as we reported in May 1997, MTS was not adequately managed as an investment. HCFA had not followed practices that are essential if management is to make informed information technology decisions. Such practices include preparing a valid cost-benefit analysis, considering viable alternatives and assessing risks, and evaluating how the proposed technology will contribute to improvements in mission performance. While H.R. 4401 requires the commission to perform a study on the design and construction of the proposed real-time claims processing system, the bill does not require that analyses such as these be performed, which can reduce risks and help ensure that information technology projects achieve maximum return on investment. Accordingly, the proposed system could benefit from the completion of investment management analyses before a decision is made about whether the system should be implemented. These analyses could determine whether cost-effective ways to address the issues that we have outlined exist. Another lesson that can be learned from the MTS project is that a phased approach can reduce the financial, schedule, and technical risks of a project. The original MTS schedule was developed on the basis of a grand design approach, in which the complete system would be implemented at one time. A phased approach can reduce the risks inherent in any large computer development effort—cost overruns, schedule delays, and the system’s failure to perform as expected. Accordingly, it might also be desirable to take a phased approach to the proposed real-time claims processing system, which could reduce its risks. In summary, H.R. 4401 has worthwhile objectives and would offer benefits to providers and beneficiaries in that decisions on authorized and denied claims would be provided immediately. 
Nevertheless, Medicare part B claims could be paid more quickly using HCFA’s current processes without such a system. Paying claims faster, however, may not be desirable because Medicare’s Supplementary Medical Insurance trust fund would lose interest revenue. Before an implementation decision is made, it is particularly important to demonstrate that a real-time claims processing system can be designed that provides the safeguards necessary to minimize improper payments. Moreover, because of the complexity of the Medicare process, additional analyses of the technical and cost risks of a real-time claims processing system would be prudent before requiring that it be developed and implemented. In addition, the administrative and benefits differences between Medicare and FEHBP would make the development and implementation of a system applicable to both programs difficult. Further, the role and makeup of the commission should be carefully considered to help ensure that any such system would take into account the current Medicare environment, as well as health care and financial management issues. Finally, lessons learned from HCFA’s MTS failure demonstrate that it is important that critical analyses be performed before implementation decisions are made. Accordingly, it may be premature to require implementation of the system envisioned by the bill until such analyses are completed. Mr. Chairman, this concludes our statement on H.R. 4401. We have also provided additional technical comments on the bill to your staff. We would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. For information about this testimony, please contact Joel Willemssen at (202) 512-6253 or by e-mail at willemssenj.aimd@gao.gov or Gloria Jarmon at (202) 512-4476 or by e-mail at jarmong.aimd@gao.gov. 
Individuals making key contributions to this testimony included Naba Barkakati, Kay Daly, Michael Fruitman, Donald Hunts, Linda Lambert, Wayne Marsh, and Margaret Mills. (511858/916364)

Pursuant to a congressional request, GAO discussed the Health Care Infrastructure Investment Act of 2000 (H.R. 4401), which calls for the development of an immediate claim, administration, payment resolution, and data collection system, focusing on the: (1) effects of the system on the claims process of both the Medicare part B program and the Federal Employees Health Benefits Program (FEHBP); and (2) the role and composition of a proposed Health Care Infrastructure Commission. GAO noted that: (1) H.R. 4401 would establish an Infrastructure Commission within the Department of Health and Human Services to design, construct, and implement an immediate claim, administration, payment resolution, and data collection system that would initially be used by the Medicare part B program; (2) this system would: (a) immediately notify each provider and supplier of coverage determination; (b) immediately notify each provider and supplier of any incomplete or invalid claims, including the identification of missing data and coding errors; (c) immediately process clean claims so that a provider or supplier may provide a written explanation of medical benefits, including costs and coverage to any beneficiary at the point of care; and (d) allow electronic payment of claims for which payment is not made on a periodic payment basis; (3) one outcome of developing an immediate claim, administration, payment resolution, and data collection system would be faster Medicare part B claims payments; (4) while the development of an immediate claim, administration, payment resolution, and data collection system to be used by the Medicare part B program might be feasible, it would significantly change the government's current processes because it would require the real-time processing of certain elements of
the claims process that are performed in batch mode or manually; (5) H.R. 4401 would also affect FEHBP, which is run by the Office of Personnel Management (OPM); (6) H.R. 4401 requires that: (a) OPM adapt the immediate claim, administration, payment resolution, and data collection system for use by the FEHBP; and (b) carriers participating in FEHBP use the system to satisfy certain minimum requirements for claim submission, processing, and payment; (7) because Medicare part B and FEHBP are substantially different programs, it would be difficult to design and implement a single system to process claims under both programs, as called for by H.R. 4401; (8) although all health plans offer inpatient hospital and outpatient medical coverage as well as certain OPM-required services, specific benefits vary; (9) these differences would make it challenging and costly to design and implement a real-time claims processing system for both programs; and (10) if a real-time claims processing system is to be developed, consideration should be given to including key Health Care Financing Administration (HCFA) and carrier officials with health care claims processing, program integrity, and financial management expertise on the Infrastructure Commission, as well as OPM and providers, since the system would affect HCFA, OPM, and the providers.
TSA is responsible for securing all modes of transportation while facilitating commerce and the freedom of movement for the traveling public. Passenger prescreening is one program among many that TSA uses to secure the domestic aviation sector. The process of prescreening passengers—that is, determining whether airline passengers might pose a security risk before they reach the passenger-screening checkpoint—is used to focus security efforts on those passengers that represent the greatest potential threat. Currently, U.S. air carriers conduct passenger prescreening by comparing passenger names against government-supplied terrorist watch lists and applying the Computer-Assisted Passenger Prescreening System rules, known as CAPPS rules. Following the events of September 11, and in accordance with the requirement set forth in the Aviation and Transportation Security Act that a computer-assisted passenger prescreening system be used to evaluate all passengers before they board an aircraft, TSA established the Office of National Risk Assessment to develop and maintain a capability to prescreen passengers in an effort to protect U.S. transportation systems and the public against potential terrorists. In March 2003, this office began developing the second-generation computer-assisted passenger prescreening system, known as CAPPS II, to provide improvements over the current prescreening process, and to screen all passengers flying into, out of, and within the United States. Based in part on concerns about privacy and other issues expressed by us and others, DHS canceled the development of CAPPS II in August 2004 and shortly thereafter announced that it planned to develop a new passenger prescreening program called Secure Flight. In contrast to CAPPS II, Secure Flight, among other changes, will only prescreen passengers flying domestically within the United States, rather than passengers flying into and out of the United States. 
Also, the CAPPS rules will not be implemented as part of Secure Flight; rather, the rules will continue to be applied by commercial air carriers. Secure Flight will operate on the Transportation Vetting Platform (TVP), the underlying infrastructure (hardware and software) that supports the Secure Flight application, including security, communications, and data management. The Secure Flight application is to perform the functions associated with receiving, vetting, and returning requests related to the determination of whether passengers are on government watch lists. This application is also to be configurable—meaning that it can be quickly adjusted to reflect changes to workflow parameters. Aspects of Secure Flight are currently undergoing development and testing, and policy decisions regarding the operations of the program have not been finalized. As currently envisioned, under Secure Flight, when a passenger makes flight arrangements, the organization accepting the reservation, such as the air carrier's reservation office or a travel agent, will enter passenger name record (PNR) information obtained from the passenger, which will then be stored in the air carrier's reservation system. While the government will be asking for only portions of the PNR, the PNR data can include the passenger's name, phone number, number of bags, seat number, and form of payment, among other information. Approximately 72 hours prior to the flight, portions of the passenger data contained in the PNR will be sent to Secure Flight through a network connection provided by DHS's CBP. Reservations or changes to reservations that are made less than 72 hours prior to flight time will be sent immediately to TSA through CBP. Upon receipt of passenger data, TSA plans to process the passenger data through the Secure Flight application running on the TVP.
During this process, Secure Flight is to determine if the passenger data match the data extracted daily from TSC’s Terrorist Screening Database (TSDB)—the information consolidated by TSC from terrorist watch lists to provide government screeners with a unified set of terrorist-related information. In addition, TSA will screen against its own watch list composed of individuals who do not have a nexus to terrorism but who may pose a threat to aviation security. In order to match passenger data to information contained in the TSDB, TSC plans to provide TSA with an extract of the TSDB for use in Secure Flight, and provide updates as they occur. This TSDB subset will include all individuals classified as either selectees (individuals who are selected for additional security measures prior to boarding an aircraft) or no-flys (individuals who will be denied boarding unless they are cleared by law enforcement personnel). To perform the match, Secure Flight is to compare the passenger, TSDB, and other watch list data using automated name-matching technologies. When a possible match is generated, TSA and potentially TSC analysts will conduct a manual review comparing additional law enforcement and other government information with passenger data to determine if the person can be ruled out as a possible match. TSA is to return the matching results to the air carriers through CBP. Figure 1 illustrates how Secure Flight is intended to operate. As shown in figure 1, when the passenger checks in for the flight at the airport, the passenger is to receive a level of screening based on his or her designated category. A cleared passenger is to be provided a boarding pass and allowed to proceed to the screening checkpoint in the normal manner. A selectee passenger is to receive additional security scrutiny at the screening checkpoint. A no-fly passenger will not be issued a boarding pass. Instead, appropriate law enforcement agencies will be notified. 
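The matching and categorization flow described above can be outlined in a short sketch. This is an illustrative sketch only: the actual name-matching technologies, data formats, thresholds, and watch-list contents used by Secure Flight are not described in this statement, so every name, field, and cutoff below is a hypothetical stand-in.

```python
# Hypothetical sketch of watch-list vetting; not the actual Secure Flight
# logic, data, or matching technology. All names and thresholds invented.
from difflib import SequenceMatcher

# Hypothetical TSDB extract: watch-list name -> category.
TSDB_EXTRACT = {
    "JOHN DOE": "no-fly",     # would be denied boarding unless cleared
    "JANE ROE": "selectee",   # would receive additional security scrutiny
}

MATCH_THRESHOLD = 0.9  # assumed cutoff for a "possible match"

def similarity(a: str, b: str) -> float:
    """Crude stand-in for the automated name-matching technologies."""
    return SequenceMatcher(None, a.upper(), b.upper()).ratio()

def vet_passenger(pnr_name: str) -> str:
    """Return 'cleared', 'selectee', or 'no-fly' for a PNR name.

    In the real process a possible match would go to manual review by TSA
    (and potentially TSC) analysts; this sketch collapses that step and
    simply returns the category of the closest match above the threshold.
    """
    best_category, best_score = "cleared", 0.0
    for watch_name, category in TSDB_EXTRACT.items():
        score = similarity(pnr_name, watch_name)
        if score >= MATCH_THRESHOLD and score > best_score:
            best_category, best_score = category, score
    return best_category

print(vet_passenger("Jon Doe"))      # near match against a no-fly entry
print(vet_passenger("Alice Smith"))  # no match above threshold
```

The sketch also omits the CBP transmission path and the return of results to air carriers; it shows only the name-matching and categorization step.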
Law enforcement officials will determine whether the individual will be allowed to proceed through the screening checkpoint or if other actions are warranted, such as additional questioning of the passenger or taking the passenger into custody. TSA has not followed a disciplined life cycle approach in developing Secure Flight, in accordance with best practices for large-scale information technology programs. Following a disciplined life cycle, activities and related documentation are to be developed in a logical sequence. TSA also has not finalized and documented functional and system requirements that fully link to each other and to source documents. Without adequately defined requirements, TSA cannot finalize a system security plan or develop a reliable program schedule or life cycle cost estimates. In addition to these concerns, other reviews of Secure Flight have raised questions about the management of the program. Evaluations of major federal information technology programs like Secure Flight, and research by others, show that following a disciplined life cycle management process, in which key activities and phases of the project are conducted in a logical and orderly sequence and are fully documented, helps ensure that programs achieve intended goals within acceptable levels of cost and risk. Such a life cycle process begins with initial concept definition and continues through requirements determination to final testing, implementation, and maintenance. TSA has established a System Development Life Cycle (SDLC) that defines a series of orderly phases and associated steps and documentation. The SDLC serves as the mechanism to ensure that systems are effectively managed and overseen. Figure 2 provides a description of TSA's SDLC phases and related documentation. TSA has not followed its SDLC in developing and managing Secure Flight.
Rather, program officials stated that they have used a rapid development method intended to enable them to develop the program more quickly. However, these officials could not provide us with details on how this approach was implemented. As a result, our analysis of steps performed and documentation developed indicates that Secure Flight has not been pursued within the context of a logical, disciplined system development methodology. Rather, the process has been ad hoc, with project activities conducted out of sequence. For example, program officials declared that the program's design phase was completed before system requirements had been adequately detailed, and key activities, such as program planning and defining system requirements, have yet to be adequately performed. TSA officials acknowledged that problems arose with Secure Flight as a result of using this approach. As a result, it is currently unclear what Secure Flight capabilities are to be developed, by when, at what cost, and what benefits are to accrue from the program. Without clarification on these decision points, the program is at risk of failure. Defining and documenting system requirements is integral to life cycle development. Based on best practices and our prior work in this area, the expected capabilities of a system such as Secure Flight should be defined in terms of requirements for functionality (what the system is to do), performance (how well the system is to execute functions), data (what data are needed by what functions, when, and in what form), interface (what interactions with related and dependent systems are needed), and security. Further, system requirements should be unambiguous, consistent with one another, linked (that is, traceable from one source level to another), verifiable, understood by stakeholders, and fully documented. TSA has prepared certain Secure Flight requirements documents, and officials stated that they are now reviewing them.
We support these review efforts because we found, in the requirements documents we reviewed, inconsistencies and ambiguities in requirements documentation for system functions, performance, data, and security, and found that these documents were not always complete. For example, according to TSA's SDLC guidance and best practices for developing information technology systems, systems like Secure Flight should have a comprehensive concept of operations covering all aspects of the program during the planning phase (see fig. 2). We reported in our March 2005 report that TSA had not yet finalized a concept of operations, which would describe conceptually the full range of Secure Flight operations and interfaces with other systems, and we recommended that it develop one. Since March 2005, TSA documents have referred to numerous concepts of operations, such as a long concept of operations, a short concept of operations, and an initial operational capability concept of operations. TSA provided a June 2005 concept of operations for our review, but this document does not contain key system requirements, such as the high-level requirements for security and privacy. In addition, we found that Secure Flight requirements were unclear or missing. For example, while the requirements that we reviewed state that the system be available 99 percent of the time, this requirement covers only the TVP and the Secure Flight application; it does not extend to the interfacing systems critical to Secure Flight operations. Thus, the availability requirements for all of the components of the Secure Flight system are not yet known. Some data requirements are also vague or incomplete; for example, one data requirement states that the data must be current, but the meaning of current is not defined.
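The significance of availability requirements that stop at the TVP and application boundary can be illustrated with simple arithmetic: when a transaction must traverse several systems in series, end-to-end availability is the product of the component availabilities, so a 99 percent requirement on two components alone says little about what air carriers and passengers would actually experience. The component figures below are hypothetical, chosen only to show the compounding effect.

```python
# Hypothetical availabilities for each system in the end-to-end chain;
# in the report, only the TVP and Secure Flight application had a stated
# 99 percent requirement. All other figures here are assumptions.
components = {
    "air carrier reservation system": 0.99,   # assumed
    "CBP network connection": 0.99,           # assumed
    "TVP / Secure Flight application": 0.99,  # stated requirement
    "TSC watch-list feed": 0.99,              # assumed
}

# For systems in series, end-to-end availability is the product of the
# individual availabilities.
end_to_end = 1.0
for name, availability in components.items():
    end_to_end *= availability

print(f"End-to-end availability: {end_to_end:.4f}")
```

Under these assumed figures, four components at 99 percent each yield roughly 96 percent end-to-end availability, which is why an availability requirement needs to be stated for every component in the chain, not just the two TSA controls directly.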
In addition, only some system security requirements are identified in the security document provided to us for the TVP, and sections in TSA’s Systems Requirements Specification contain only placeholder notes—“to be finalized”—for security and privacy requirements. TSA officials acknowledged that it is important that requirements be traceable to ensure that they are consistently, completely, and correctly defined, implemented, and tested. To help accomplish this, TSA officials stated that they use a requirements tracking tool for Secure Flight that can align related requirements to different documents, and thus establish traceability (e.g., it can map the Systems Requirements Specification to a functional requirements document). According to program officials, this tool can also be used for aligning and tracing requirements to test cases (i.e., scenarios used to determine that the system is working as intended). We found, however, that requirements for Secure Flight have not been fully traced. For example, we were not able to trace system capabilities in contractual documents to the concept of operations and then to the various requirement documents, to design phase use cases, and to test cases. In addition, contractor staff we interviewed stated that they were unable to use this tool to align or trace necessary requirements without the aid of supplemental information. Without internal alignment among system documentation relating to requirements, there is not adequate assurance that the system produced will perform as intended. In addition, we found that available Secure Flight requirements documents did not define the system’s boundaries, including interfaces, for each of the stakeholders—that is, the scope of the system from end to end, from an air carrier to CBP, to TSA, to TSC, and back to TSA, then again to CBP and air carriers (refer to fig. 1 for an overview of this process). 
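The kind of requirements traceability discussed above, from source documents through requirements to test cases, can be sketched as a simple cross-check in which untraced items surface as gaps. The identifiers and document names below are invented for illustration and do not reflect the contents of TSA's actual tracking tool.

```python
# Hypothetical traceability data: requirement id -> source document,
# and test case id -> the requirement ids it verifies.
requirement_sources = {
    "REQ-001": "Concept of Operations",
    "REQ-002": "System Requirements Specification",
    "REQ-003": None,  # no source document: a backward-traceability gap
}
test_coverage = {
    "TC-01": ["REQ-001"],
    "TC-02": ["REQ-001", "REQ-002"],
}

def traceability_gaps(sources, tests):
    """Return requirements lacking a source document or any test case."""
    tested = {req for reqs in tests.values() for req in reqs}
    missing_source = [r for r, s in sources.items() if s is None]
    untested = [r for r in sources if r not in tested]
    return missing_source, untested

missing_source, untested = traceability_gaps(requirement_sources, test_coverage)
print("No source document:", missing_source)
print("No test coverage:", untested)
```

A tool that maintains such mappings end to end, from contractual documents to the concept of operations, requirements, use cases, and test cases, is what gives assurance that each requirement is consistently defined, implemented, and tested.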
Defining a system's boundaries is important in ensuring that system requirements reflect all of the processes that must be executed to achieve a system's intended purpose. According to TSA's SDLC guidance, a System Boundary Document is to be developed early in the system life cycle. However, in its third year of developing a passenger prescreening system, TSA has not yet prepared such a document. Although the System Boundary Document was not available, the program's Systems Security Document does refer to an "accreditation boundary," which defines the Secure Flight system from the standpoint of system security accreditation and certification. According to this definition of what Secure Flight includes, those systems that are needed to accomplish Secure Flight program goals (e.g., those of commercial air carriers, CBP, and TSC) are not part of Secure Flight. If the boundary documents, and thus the requirements, do not reflect all system processes and connections that need to be performed, the risk is increased that the system will not achieve Secure Flight's intended purpose. Moreover, until all system requirements have been defined, TSA will not be able to stress-test Secure Flight in an operational, end-to-end mode. In our March 2005 report, we recommended that TSA finalize its system requirements documents and ensure that these documents address all system functionality. Although TSA agreed with our recommendations, the requirements documentation that we reviewed showed that the agency has not yet completed these activities. Our evaluations of major federal information technology programs, and research by others, have shown that following a disciplined life cycle management process decreases the risks associated with acquiring systems. The steps and products in the life cycle process each have important purposes, and they have inherent dependencies among themselves.
Thus, if earlier steps and products are omitted or deficient, later steps and products will be affected, resulting in costly and time-consuming rework. For example, a system can be effectively tested to determine whether it meets requirements only if these requirements have already been fully defined. Concurrent, incomplete, and omitted activities in life cycle management exacerbate the program risks. Life cycle management weaknesses become even more critical as the program continues, because the size and complexity of the program will likely only increase, and the later problems are found, the harder and more costly they will likely be to fix. In October 2005, Secure Flight's director of development stated in a memorandum to the assistant TSA administrator responsible for Secure Flight that by not following a disciplined life cycle approach, in order to expedite the delivery of Secure Flight, the government had taken a calculated risk during the requirements definition, design, and development phases of the program's life cycle development. The director stated that by prioritizing delivery of the system by a specified date in lieu of delivering complete documentation, TSA had to lower its standards of what constituted acceptable engineering processes and documentation. Since then, TSA officials stated that the required system documentation associated with each phase of the TSA life cycle is now being developed to catch up with development efforts. In addition, TSA recognized that it faces challenges preparing required systems documentation, and to help in this regard it has recently hired a certified systems program manager to manage systems development. In January 2006, this program manager stated that as Secure Flight moves forward, TSA's SDLC would be followed in order to instill greater rigor and discipline into the system's development.
In addition, TSA plans to hire a dedicated program director for Secure Flight to manage program activities, schedules, milestones, costs, and program contractors, among other things. TSA has taken steps to implement an information system security management program for protecting Secure Flight information and assets. Secure Flight's security plans and the related security review, which TSA developed and conducted to establish authority to operate, are important steps in the system's development. However, the steps TSA has taken to date related to system security are individually incomplete and collectively fall short of a comprehensive system security management program. Federal guidance and industry best practices describe critical elements of a comprehensive information system security management program. Without effective system security management, it is unlikely that Secure Flight will, for example, be adequately protected against unauthorized access and use, disruption, modification, and destruction. According to National Institute of Standards and Technology (NIST) and Office of Management and Budget (OMB) guidance under the Federal Information Security Management Act, as well as industry best practices, a comprehensive system security management program includes (1) conducting a system-wide risk assessment that is based on system threats and vulnerabilities, (2) developing system security requirements and related policies and procedures that govern the operation and use of the system and address identified risks, (3) certifying that the system is secure based on sufficient review and testing to demonstrate that the system meets security requirements, and (4) accrediting the system as secure in an operational setting. TSA has developed two system security plans—one for the TVP and one for the Secure Flight application. However, neither of these plans nor the security activities that TSA has conducted to date are complete.
For example, while security threats and vulnerabilities were assessed in the documentation and risks were identified in risk assessments, requirements to address these risks were only partially defined in the security plan for the TVP, and they were not included at all in the plan for the Secure Flight application. In addition, the sections on security requirements and privacy requirements in the System Requirements Specification document read "to be finalized" with no further description. Moreover, we found that the system security plans did not reflect the current level of risk designated for the program. For example, although the July 15, 2005, System Security Plan for the TVP arrived at an overall assessment of its exposure to risks as being "medium," an August 23, 2005, requirements document found that the security risk level for the TVP was "high." As a system moves from a medium to a high level of risk, the security requirements become more stringent. TSA has not provided us with an updated System Security Plan for the TVP that addresses this greater level of risk by including additional NIST requirements for a high-risk system. In addition, this TVP System Security Plan included only about 40 percent of the NIST requirements associated with a medium-risk system. Without addressing all NIST requirements, in addition to those required for a high-risk system, TSA may not have proper controls in place to protect sensitive information. According to federal guidance and requirements, the determination and approval of the readiness of a system to securely operate is accomplished via a certification and accreditation process. On September 30, 2005, the TSA assistant administrator responsible for Secure Flight formally granted authority, based on certification and accreditation results, for the TVP and the Secure Flight application to operate.
However, the team performing the certification could not be sure that it had tested all components of the security system for the TVP and the Secure Flight application, because TSA lacked an effective and comprehensive inventory system. Therefore, the certification team could not determine whether its risk assessments were complete or accurate. This team also documented 62 security vulnerabilities for the Secure Flight application and 82 security vulnerabilities for the TVP. The certification team recommended granting authority to operate on the condition that, within 90 days, corrective action be taken or an exemption obtained for the identified vulnerabilities; otherwise, the authority to operate would expire. TSA officials stated that these vulnerabilities had been addressed except for three that are being reviewed in a current security audit. TSA has proceeded with Secure Flight development over the past year without a complete and up-to-date program management plan, and without associated cost and schedule estimates showing what work will be done by whom, at what cost, and when. A program management plan can be viewed as a central instrument for guiding program development. Among other things, the plan should include a breakout of the work activities and products that are to be conducted in order to deliver a mission capability that satisfies stated requirements and produces promised mission results. This information, in turn, provides the basis for determining the time frames and resources needed to accomplish this work, including the basis for milestones, schedules, and cost estimates. TSA has not provided us with either a complete and up-to-date program management plan or an estimated schedule and costs for Secure Flight. According to a TSA official, an updated program management plan is currently being developed and is about 90 percent complete.
In lieu of a program management plan with a schedule and milestones, TSA has periodically disclosed program milestones. However, the basis for and meaning of these milestones have not been made clear, and TSA’s progress in meeting these milestones has not been measured and disclosed. TSA’s SDLC and OMB guidance require that programs like Secure Flight provide risk-adjusted schedule goals, including key milestones, and that programs demonstrate satisfactory progress toward achieving their stated performance goals. In March 2005, we reported that the milestone that TSA set for achieving initial operating capability for Secure Flight had slipped from April 2005 to August 2005. TSA officials stated that TSA revised this milestone to state that instead of achieving initial operating capability, it would begin operational testing. This new milestone subsequently slipped first to September 2005, then to November 2005. Since that time, the program has not yet begun operational testing or initial operations, and TSA has not yet produced an updated schedule identifying when program operations will begin or when other key milestones are to be achieved to guide program development and implementation. Further, while agency officials stated that they are now planning for operational testing of an unspecified capability, no milestone date has been set for doing so. TSA officials stated that they have not maintained an updated program schedule for Secure Flight in part because the agency has not yet determined the rulemaking approach it will pursue for requiring commercial air carriers to submit certain passenger data needed to operate Secure Flight, among other things. Specifically, TSA officials stated that a schedule with key milestones, such as operational testing, cannot be set until after air carriers have responded to the rulemaking and provided their plans and schedules for participating in Secure Flight. 
The rulemaking has been pending since the spring of 2005, and the rule remains in draft form and under review, according to TSA officials. Once the rule has been issued, TSA officials stated that air carriers will be given time to respond with their plans and schedules. TSA officials further stated that until this occurs, and a decision is made as to how many air carriers will participate in a yet-to-be-defined initial phase of the program (they are expected to begin incrementally), a program schedule cannot be set. Further, TSA has not yet established cost estimates for developing and deploying either an initial or a full operating capability for Secure Flight, and it has not developed a life cycle cost estimate (estimated costs over the expected life of a program, including direct and indirect costs and the costs of operation and maintenance). TSA also has not updated its expenditure plan (a plan that generally identifies near-term program expenditures) to reflect the cost impact of program delays, estimated costs associated with obtaining system connectivity with CBP, or estimated costs expected to be borne by air carriers. Program and life cycle cost estimates are critical components of sound program management for the development of any major investment. Developing cost estimates is also required by OMB guidance and can be important in making realistic decisions about developing a system. Expenditure plans are designed to provide lawmakers and other officials overseeing a program's development with a sufficient understanding of the system acquisition to permit effective oversight, and to allow for informed decision making about the use of appropriated funds.
In our March 2005 report, we recommended that TSA develop reliable life cycle cost estimates and expenditure plans for the Secure Flight program, in accordance with guidance issued by OMB, in order to provide program managers and oversight officials with the information needed to make informed decisions about program development and resource allocations. Although TSA agreed with our recommendation, it has not yet provided this information. TSA officials stated that developing program and life cycle cost estimates for Secure Flight is challenging because no similar programs exist on which to base cost estimates and because of the uncertainties surrounding Secure Flight requirements. Further, they stated that cost estimates cannot be accurately developed until after system testing is completed and policy decisions have been made regarding Secure Flight requirements and operations. Notwithstanding these statements, TSA officials stated that they are currently assessing program and life cycle costs as part of their rebaselining and that this new baseline will reflect updated cost, funding, scheduling, and other aspects of the program's development. While we recognize that program unknowns introduce uncertainty into the program-planning process, including estimating tasks, time frames, and costs, uncertainty is a practical reality in planning all programs and is not a reason to forgo developing plans, including cost and schedule estimates, that reflect both the known and unknown aspects of the program. In program planning, assumptions need to be made and disclosed in the plans, along with the impact of the associated uncertainty on the plans and estimates. As more information becomes known over the life of the program, these plans should be updated to reflect the greater confidence with which activities can be estimated.
Program management plans and related schedules and cost estimates, based on well-defined requirements, are important in making realistic decisions about a system's development, and can alert an agency to growing schedule or cost problems and the need for mitigating actions. Moreover, best practices and related federal guidance emphasize the need to ensure that programs and projects are implemented at acceptable costs and within reasonable and expected time frames. Investments such as Secure Flight are approved on the expectation that programs and projects will meet certain commitments to produce certain capabilities and benefits (mission value) within the defined schedule and cost. Until an updated program management plan, related schedules and cost estimates, and expenditure plans are prepared for Secure Flight—which should be developed despite program uncertainties and updated as more information is gained—TSA and Congress will not be able to provide complete oversight of the program's progress in meeting established commitments. DHS and TSA have executive and advisory oversight mechanisms in place to oversee Secure Flight. As we reported in March 2005, the DHS Investment Review Board (IRB)—designed to review certain programs at key phases of development to help ensure they meet mission needs at expected levels of cost and risk—reviewed the TVP, from which Secure Flight will operate, in January 2005. As a result of this review, the board withheld approval for the TVP to proceed from development and testing into production and deployment until a formal acquisition plan, a plan for integrating and coordinating Secure Flight with other DHS people-screening programs, and a revised acquisition program baseline (cost, schedule, and performance parameters) had been completed. Since that time, TSA has not yet addressed these conditions and has not obtained approval from the IRB to proceed into production.
DHS officials stated that an IRB review is scheduled to be held in March 2006—14 months after the IRB last met to examine Secure Flight—to review Secure Flight and other people-screening programs, including international prescreening conducted by CBP. Specifically, the board will review the acquisition strategy and progress for each program, focusing, in part, on areas of potential duplication. According to TSA officials, the agency intends to establish a new program cost, schedule, and capability baseline for Secure Flight, which will be provided to the IRB for review. DHS’s Data Privacy and Integrity Advisory Committee also reviewed Secure Flight during the last year. Committee members have diverse expertise in privacy, security, and emerging technology, and come from large and small companies, the academic community, and the nonprofit sector. In December 2005, the committee issued five recommendations on key aspects of the program, including recommendations designed to minimize data collection and provide an effective redress mechanism to passengers who believe they have been incorrectly identified for additional security scrutiny. TSA officials stated that they are considering the advisory committees’ findings and recommendations as part of their rebaselining efforts. In September 2004, TSA appointed an independent working group within the Aviation Security Advisory Committee, composed of government privacy and security experts, to review Secure Flight. The working group issued a report in September 2005 that concluded, among other things, that TSA had not produced a comprehensive policy document for Secure Flight that could define oversight or governance responsibilities, nor had it provided an accountability structure for the program. The group attributed this omission to the lack of a program-level policy document issued by a senior executive, which would clearly state program goals. 
The working group also questioned Secure Flight’s oversight structure and stated that it should focus on the effectiveness of privacy aspects of the program and, in doing so, consider oversight regimes for federal law enforcement and U.S. intelligence activities. In addition to oversight reviews initiated by DHS and TSA, the DOJ-OIG issued a report in August 2005 reviewing TSC’s role in supporting Secure Flight. In its report, the DOJ-OIG found that TSC faced several key unknowns with respect to supporting Secure Flight, including when the program will begin, the volume of inquiries it will receive, the TSC resources required to respond to these inquiries, and the quality of the data it will have to analyze. In light of these findings, the DOJ-OIG report recommended that, among other things, TSC better prepare itself for future needs related to Secure Flight by strengthening its budgeting and staffing processes and by improving coordination with TSA on data exchange standards. In June 2005, a DOJ-OIG report recommended that TSC conduct a record-by-record review of the TSDB to improve overall data quality and integrity. TSC agreed with all recommendations made.

TSA has drafted policy and technical guidance to help inform air carriers of their Secure Flight responsibilities, and has begun coordinating with CBP and TSC on Secure Flight requirements and broader issues of integration and interoperability between Secure Flight and other people-screening programs. However, TSA has not yet provided the information and technical requirements that all stakeholders need to finalize their plans to support the program’s operations, and to adequately plan for the resources needed to do so. As we reported in March 2005, key federal and commercial stakeholders—CBP, TSC, and commercial air carriers—will play a critical role in the collection and transmission of data needed for Secure Flight to operate successfully.
Accordingly, TSA will need to ensure that requirements for each stakeholder are determined. For instance, TSA will need to define how air carriers are to connect to CBP and what passenger data formats and structures will be used. Although more remains to be done, TSA has worked to communicate and coordinate requirements with stakeholders. For example, TSA has maintained weekly communications with CBP and TSC regarding their roles and responsibilities related to Secure Flight operations. TSA has also begun to address air carriers’ questions about forthcoming Secure Flight requirements. For example, TSA officials have produced draft air carrier guidance, known as the Secure Flight Data Transmission Plan Guidance (DTPG). The final DTPG is to include guidance to air carriers addressing the following areas: Secure Flight’s mission overview and objectives, project planning phases, aircraft operator operations and airport procedures, technical data requirements, aircraft operator application development, Secure Flight operations, and system maintenance and support. According to TSA officials, air carriers have received copies of a partial draft DTPG, and some air carriers have submitted feedback to Secure Flight’s Airline Implementation and Operations Team that TSA says it is working to address. In addition to drafting guidance, TSA has conducted preliminary network connectivity testing between TSA and federal stakeholders. For example, messages have been transmitted from CBP to TSA and back. However, such tests included only dummy data. According to CBP officials, no real-time passenger data have been used in this testing, and system stress testing has not yet been conducted. Without real-time passenger data, these officials said, CBP cannot estimate total capacity or conduct stress testing to ensure the system operates effectively.
Further, according to a TSC official, testing has been conducted to show that a data exchange between the TSC and TSA is functioning, but the system has not been stress-tested to determine if it can handle the volume of data traffic that will be required to operate Secure Flight. According to this official, TSA has not specified what these data volume requirements will be. TSA officials acknowledged that they have not yet made this determination and stated that they will not be able to do so until they (1) issue the rule, and (2) have received the air carrier plans for participating in Secure Flight based on requirements identified in the rule. Although CBP, TSC, and air carrier officials we interviewed acknowledged TSA’s outreach efforts, they cited several areas where additional information was needed from TSA before they could fully support Secure Flight. Several CBP officials stated, for example, that they cannot proceed with establishing connectivity with all air carriers until DHS publishes the rule—the regulation that will specify what type of information is to be provided for Secure Flight—and the air carriers provide their plans for providing this information. Similarly, a TSC official stated that TSC cannot make key decisions on how to support Secure Flight until TSA provides estimates of the volume of potential name matches that TSC will be required to screen, as identified above. The TSC official stated that without this information, TSC cannot make decisions about required resources, such as personnel needed to operate its call center. As we reported in March 2005, air carriers also expressed concerns regarding the uncertainty of the Secure Flight system and data requirements, and the impact these requirements may have on the airline industry and traveling public. 
Air carriers will not be able to begin to modify their passenger data systems to record the data attributes—such as full name and date of birth, which Secure Flight will use to conduct name matching—until TSA determines and communicates which specific data attributes are to be used. Oversight groups that have reviewed Secure Flight agreed that additional work was needed to improve the flow of information to, and coordination with, program stakeholders. In its December 2005 report on Secure Flight, the DHS Data Privacy and Integrity Advisory Committee stated that TSA needs to be clear with air carriers about what information it needs now and what information it may consider requesting in the future, to enable air carriers to avoid sequential revisions of data-handling systems. Also, in September 2005, the Aviation Security Advisory Committee working group expressed concerns about the lack of clarity regarding how Secure Flight will interact with other screening programs. Further, in its August 2005 audit of TSC’s support of Secure Flight, the DOJ-OIG reported that TSC officials believed that their ability to prepare for the implementation of Secure Flight had been hampered by TSA’s failure to make, communicate, and comply with key program and policy decisions in a timely manner, such as the launch date and volume of screening to be conducted during initial implementation. In addition, the report noted that because TSA is unsure about how many air carriers will participate in the initial phase of the program, neither TSA nor TSC can know how many passenger records will be screened, nor can they project the number of watch list hits that will be forwarded to TSC for action. Finally, the DOJ-OIG report concluded that the shifting of critical milestones—including TSA’s schedule slippages over the past year—has affected TSC’s ability to adequately plan for its role in Secure Flight.
Despite TSA’s outreach efforts, stakeholder participation in Secure Flight depends on TSA completing its definition of requirements and describing these in the rule. Because TSA has not fully defined system requirements, key stakeholders have not been able to fully plan for or make needed adjustments to their systems. In our March 2005 report, we recommended that TSA develop a plan for establishing connectivity among the air carriers, CBP, and TSC to help ensure the secure, effective, and timely transmission of data for use in Secure Flight operations. Although TSA has continued to coordinate with these key stakeholders, the agency still has not completed the plans and agreements necessary to ensure effective support of Secure Flight.

In January 2006, TSA officials stated that they are in the early stages of coordinating with CBP on broader issues of integration and interoperability related to other people-screening programs. These broader coordination efforts, which are focused on minimizing duplicative efforts that may exist between the agencies that screen individuals using watch list data and on achieving synergies and efficiencies, are important because they may affect how Secure Flight will operate initially and in the future. Specifically, TSA officials stated that they are coordinating more closely with CBP’s international prescreening initiatives for passengers on flights bound for the United States. The Air Transport Association and the Association of European Airlines—organizations representing air carriers—had requested, among other things, that both domestic and international prescreening function through coordinated information connections and avoid unnecessary duplication of communications, programming, and information requirements.
In response to air carrier concerns, and to DHS initiatives to minimize duplicative efforts, officials from both CBP and TSA explained that they are beginning to work together to ensure that air carriers have a single interface with the government for prescreening both domestic and international passengers. TSA and CBP officials further stated that they will try to use CBP’s network to transmit domestic and international passenger data to and from the air carriers, thus providing the air carriers with a single interface for sending and receiving information. TSA and CBP officials also stated that air carriers should receive a common notification about whether a passenger—domestic or international—requires normal processing, requires additional screening, or is not permitted to board a plane. However, according to these officials, TSA and CBP have not yet resolved other system differences—such as the fact that their prescreening systems use different passenger data elements, documentation, and name-matching technologies—that could lead to conflicting notifications instructing air carriers to handle a passenger differently for an international flight than for a domestic one. Both TSA and CBP officials agreed that additional coordination efforts are needed to resolve these differences, and stated that they plan to work closely together in developing a prescreening capability for both domestic and international passengers. Decisions made as a result of further coordination could result in changes to the way that Secure Flight is implemented.

In addition to coordinating with CBP on international prescreening, TSA faces additional coordination challenges working with TSC. Specifically, according to TSC officials, TSC has an initiative under way to, among other things, better safeguard watch list data.
Currently, TSC exports watch list data to other federal agencies, such as TSA and the State Department, for use in these agencies’ screening efforts or processes for examining documents and records related to terrorism. However, TSC is currently developing a new system whereby watch list data would not be exported, but rather would be maintained by TSC. This system, called Query, is to serve as a common shared service that will allow agencies to directly search the TSDB using TSC’s name-matching technology for their own purposes. TSC has conducted limited testing of the system. If TSC chooses to use Query, TSA will be required to modify the system architecture for Secure Flight in order to accommodate the new system. According to a TSC official, this effort could be costly. While TSA acknowledged in its draft concept of operations plan in June 2005 that Secure Flight would need to be modified to accommodate TSC’s Query “as necessary,” the agency has not made adjustments to its system requirements or conducted a cost analysis of expected impacts on the Secure Flight program. Rather, TSA has decided that it will continue developing the Secure Flight application, which includes TSA’s name-matching technologies. Thus, TSC will need to export watch list data to TSA to support Secure Flight, once it becomes operational.

Several activities are under way, or have yet to be decided, that will affect Secure Flight’s effectiveness, including how operational testing is conducted and how data requirements and data accuracy are determined. TSA has been testing and evaluating name-matching technologies for determining what type of passenger data will be needed to match against the TSDB. These tests have thus far been conducted in a controlled, rather than real-world, environment, using historical data, and additional testing is needed.
In addition, TSA has not made key decisions regarding how the name-matching technologies to be used by Secure Flight will operate or which data will be used to conduct name matching. While TSA is not responsible for ensuring the accuracy of passenger data, the agency must nonetheless advise stakeholders on data accuracy and quality requirements. Another factor that could impact the effectiveness of Secure Flight in identifying known or suspected terrorists is the system’s inability to identify passengers who assume the identity of another individual by committing identity theft, or passengers who use false identifying information. Secure Flight is neither intended nor designed to address these vulnerabilities.

TSA has tested—and continues to test—the effectiveness of one aspect of the Secure Flight system, namely name-matching technologies. These name-matching tests will help TSA determine what passenger data will be needed for the system to most effectively match passenger records with information contained in the TSDB. These tests are critical to defining data requirements and making decisions about how to configure the name-matching technologies. Additional tests will need to be conducted in an operational, real-world environment to fully understand how to configure the system effectively. This is because the name-matching tests conducted to date took place under controlled, or simulated, conditions rather than in a real-world environment. For example, TSA used historic air carrier passenger data from June 2004 and historic and simulated watch list data to test the functionality and effectiveness of Secure Flight’s name-matching technologies, which match air carrier passenger records against potential terrorists in the TSDB. Additional testing beyond name matching also needs to be conducted, after TSA rebaselines its program, defines system requirements, and begins adhering to its SDLC.
For example, stress and operational testing would help determine whether Secure Flight can process the volume of data expected and operate as intended in an operational environment. As we reported in March 2005, TSA had planned to conduct a series of operational tests consisting of increasingly larger increments of the system’s functionality until the complete system was tested. These tests were to begin in June 2005. However, due to program delays, TSA has not yet conducted the end-to-end testing needed to verify that the entire system, including any interfaces with external systems, functions as intended in an operational environment. TSA also has not yet conducted the stress testing needed to measure the system’s performance and availability in times of particularly heavy (i.e., peak) loads. Recently, TSA documented its overall strategy for conducting these tests and developed draft test plans. TSA officials stated that information about the agency’s plans for future testing will be included in its rebaselined program plan. Until this testing is complete, it will not be possible to determine whether Secure Flight will function as intended in an operational environment.

Key policy decisions that will influence the effectiveness of Secure Flight in identifying passengers who should undergo additional security scrutiny have not yet been made. These policy decisions include (1) the passenger information that air carriers will be required to collect and provide for vetting, (2) the name-matching technologies that will be used to vet passenger data against data contained in the TSDB, and (3) the thresholds that will be set to determine when a passenger will be identified as a potential match against the TSDB.
These three decisions, discussed below, are all critical to ensuring that Secure Flight identifies potential terrorist threats as effectively as possible while minimizing the number of potential matches that will require further review by TSA and TSC analysts. (1) Determining the passenger information that air carriers will be required to collect and provide for vetting: TSA needs to decide which data attributes air carriers will be required to provide in passenger data to be used to match against data contained in the TSDB, such as full first, middle, and last name plus other discrete identifiers, such as date of birth. Using too many data attributes can increase the difficulty of matching, since the risk of errors or mismatches increases. Using too few attributes can create an unnecessarily high number of incorrect matches due to, among other things, the difficulty of differentiating among similar common names without using further information. Initial TSA test results have shown that the use of name and date of birth alone might not be sufficient for decreasing the number of false positives—that is, passengers inappropriately matched against data contained in the TSDB. (2) Selecting name-matching technologies used to vet passenger names against the TSDB: TSA must determine what type or combination of name-matching technologies to acquire and implement for Secure Flight, as these different technologies have different capabilities. For example, TSA’s PNR testing showed that some name-matching technologies are more capable than others at detecting significant name modifications, which allows for the matching of two names that contain some variation. Detecting variation is important because passengers may intentionally make alterations to their names in an attempt to conceal their identity. Also, unintentional variations can result from different translations of nonnative names or data entry errors. 
For example, some name-matching technologies might correctly discriminate between “John Smith” and “John Smythe,” while others may not. However, name-matching technologies that are best at detecting name variations may also increase the number of potential matches that will have to be further reviewed, an effect that could be offset by using a combination of name-matching technologies. TSA officials stated in November 2005 that they planned to continuously evaluate the best name-matching technologies, or combination of technologies, to enhance the system in future iterations. TSA officials recently stated that they had made, but not yet documented, an initial determination regarding the name-matching technologies that will be used for Secure Flight and that they plan to conduct continuous reviews of the name-matching technologies to address circumstances as they arise. (3) Selecting thresholds for determining when a possible name match has occurred: TSA has discretion to determine what constitutes a possible match between a passenger’s data and a TSDB record. For each name that is matched, the name-matching tool will assign a numeric score that indicates the strength of the potential match. For example, a score of 95 out of 100 would indicate a more likely match than a score of 85. If TSA were to set the threshold too high, many names may be cleared and relatively few flagged as possible matches—that is, there is a possibility that terrorists’ names may not be matched. Conversely, if the threshold were set too low, passengers may be flagged unnecessarily, and relatively few cleared through the automated process. As an example of the importance of setting thresholds, during one of the PNR tests conducted, TSA set the name-matching threshold at 80, which resulted in over 60 percent of passengers requiring manual review. Alternatively, when TSA set the threshold at 95, less than 5 percent of the same group of passenger records were identified as requiring further review.
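The interplay between similarity scores and thresholds can be sketched with a toy example. The sketch below uses a generic string-similarity ratio from Python’s standard library as a stand-in for the name-matching technologies under evaluation; it is not TSA’s actual algorithm, and all names, lists, and scores are hypothetical.

```python
from difflib import SequenceMatcher

def match_score(passenger_name, watch_list_name):
    """Return a 0-100 similarity score between two names.

    A stand-in for a real name-matching technology; production systems
    would use phonetic and transliteration-aware matching instead.
    """
    ratio = SequenceMatcher(None, passenger_name.lower(),
                            watch_list_name.lower()).ratio()
    return round(ratio * 100)

def flag_passengers(passengers, watch_list, threshold):
    """Flag passengers whose best score against the watch list meets the threshold."""
    flagged = []
    for name in passengers:
        best = max(match_score(name, w) for w in watch_list)
        if best >= threshold:
            flagged.append((name, best))
    return flagged

watch_list = ["john smythe", "jane doe"]          # hypothetical
passengers = ["John Smith", "John Smythe", "Joan Smyth", "Alice Brown"]

# A lower threshold flags more records for manual review; a higher one
# clears more records automatically but risks missing name variants.
for threshold in (80, 95):
    print(threshold, flag_passengers(passengers, watch_list, threshold))
```

Lowering the threshold from 95 to 80 in this toy case triples the number of flagged records, mirroring in miniature the 5 percent versus 60 percent manual-review rates observed in the PNR tests.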
With about 1.8 million passengers traveling domestically per day, a threshold that is too low could produce an unmanageable number of matches—possibly leading to passenger delays—while setting the threshold too high could result in the system missing potential terrorists. Although TSA will not decide how the thresholds should be set until it conducts additional evaluations, it has indicated that the threshold might be adjusted to reflect changes in the terrorist threat level. This would result in Secure Flight flagging more names for potential manual review in order to ensure greater scrutiny in response to changing conditions.

TSA plans to finalize decisions on these factors as system development progresses. However, until these decisions are made, requirements will remain unsettled, and key stakeholders—in particular air carriers—will not have the information they need to assess and plan for the changes to their systems necessary for interfacing with Secure Flight. Air carriers and reservation companies will also not know which additional data attributes they may be required to collect from passengers, as reservations are made, to support Secure Flight operations. These decisions will also directly influence the number of analysts that TSA and TSC will need to manually review potential matches to the TSDB. Accordingly, stakeholders have expressed concern that they have not been provided information about these decisions, and stated that they are awaiting additional information from TSA in order to move forward with their plans to interface with and support Secure Flight.

Two additional factors that will impact the effectiveness of Secure Flight are (1) the accuracy and completeness of data contained in TSC’s TSDB and in passenger data submitted by air carriers, and (2) the ability of TSA and TSC to identify false positives and resolve possible mistakes during the data matching process, in order to minimize inconveniencing passengers.
According to TSA and TSC officials, the data attributes that Secure Flight will require for name matching need to be included in both the passenger data and the TSDB in order for the automated system to effectively match names between the two lists. As we reported in March 2005, while the completeness and accuracy of data contained in the TSDB can never be certain—given the varying quality of intelligence information gathered, and changes in this information over time—TSC has established some processes to help ensure the quality of these data. However, the DOJ-OIG, in its June 2005 review of TSC, found that TSC could not ensure that the information contained in its databases was complete or accurate. According to a TSC official, since the time of the DOJ-OIG review, TSC has taken several steps to improve the quality of TSDB records, including conducting a record-by-record review, updating procedures for a daily review of each new or modified record, and using automated rules to check the completeness of records received from other agencies. According to this official, TSA and TSC plan to enter into a letter of agreement that will describe, among other things, the TSDB data elements that TSC will provide to TSA for use in Secure Flight. However, these data requirements have not yet been determined. In order to obtain accurate and complete passenger data from air carriers, TSA plans to describe, in the forthcoming rule, the required data attributes that must be contained in passenger data provided to TSA. TSA also plans to issue a final and complete DTPG to specify the data formats and other transmission requirements. However, the accuracy and completeness of the information contained in the passenger data record will still depend on the air carriers’ reservations systems and passengers, and on the air carriers’ modifications of their systems for transmitting the data in the proper format.
These steps are not trivial, as indicated by the June 2004 historical passenger data provided by the air carriers for TSA’s name-matching tests. For these tests, many passenger data records submitted by air carriers were found to be inaccurate or incomplete, creating problems during the automated name-matching process. For example, some passenger data included invalid characters or prefixes, such as “Mr.” and “Mrs.,” in the name fields. Other inaccuracies included spelling errors and inverted birth date information. Additionally, some of the records had omitted or incomplete data elements necessary for performing the automated match or were in an unusable format.

In a related effort to address accuracy, TSA and TSC plan to work together to identify false positives as passenger data are matched against data in the TSDB and to resolve mistakes to the extent possible before inconveniencing passengers. The agencies will use intelligence analysts during the actual matching of passenger data to data contained in the TSDB to increase the accuracy of data matches. As indicated in figure 1, when TSA’s name-matching technologies indicate a possible match, TSA analysts are to manually review all of the passenger data and other information to determine if the passenger can be ruled out as a match to the TSDB. If a TSA analyst cannot rule out a possible match, the record will be forwarded to a TSC analyst to conduct a further review using additional information. According to a TSC official, TSA and TSC analysts participated in tabletop exercises to test the consistency of their respective manual reviews, and found that the matching logic used by both groups of analysts was consistent. This official stated that TSA and TSC also tested their operational procedures, and found gaps in those procedures that are now being addressed. According to this official, TSA and TSC plan to conduct additional joint exercises.
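Problems like honorific prefixes in name fields and inverted birth dates are the kind of defects commonly handled by normalizing records before automated matching is attempted. The sketch below is a generic illustration of that idea, not TSA’s or the air carriers’ actual procedure; the field formats and prefix list are hypothetical.

```python
import re
from datetime import datetime

# Hypothetical list of honorific prefixes to strip from name fields.
HONORIFICS = {"mr", "mrs", "ms", "dr"}

def normalize_name(raw):
    """Strip honorific prefixes and invalid characters from a name field."""
    # Keep only letters, spaces, hyphens, and apostrophes.
    cleaned = re.sub(r"[^A-Za-z\s'\-]", "", raw)
    words = cleaned.split()
    if words and words[0].lower() in HONORIFICS:
        words = words[1:]
    return " ".join(words).lower()

def parse_birth_date(raw):
    """Return an ISO date string, or None if the field is unusable.

    Tries the expected format first, then a common inverted form.
    """
    for fmt in ("%Y-%m-%d", "%d-%m-%Y"):
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue
    return None  # set aside for manual correction

print(normalize_name("Mr. John Sm#ith"))   # -> "john smith"
print(parse_birth_date("12-07-1965"))      # -> "1965-07-12"
print(parse_birth_date("not-a-date"))      # -> None
```

Records whose fields cannot be normalized or parsed would be set aside for correction rather than silently matched on partial data.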
Completing these exercises will be important to further understanding the effectiveness of using intelligence analysts to clear misidentified passengers during Secure Flight operations.

Another factor that could affect Secure Flight’s effectiveness in identifying known or suspected terrorists is the system’s inability to identify passengers who falsify their identifying information or who commit identity theft. TSA officials stated that the program is neither intended nor designed to protect against the use of falsified identities or to detect identity theft. However, TSA officials stated that the use of commercial data during the name-matching process may help identify situations in which a passenger submits fictitious information, such as a false address. In the spring of 2005, a TSA contractor tested the use of commercial data composed of personally identifiable information (such as name and address) to determine, among other things, if such data could be used to increase Secure Flight’s effectiveness in identifying false or stolen identities. However, according to the DHS Data Privacy and Integrity Advisory Committee report, testing performed to date does not provide a reasonable case for utilizing commercial data as part of Secure Flight. TSA officials are not currently pursuing the use of commercial data to support Secure Flight because the fiscal year 2006 DHS appropriations act prohibits TSA from using data or databases obtained from, or that remain under the control of, a non-federal entity, effectively terminating this type of testing for the duration of fiscal year 2006. Further, TSA officials stated that incorporating biometrics—technologies that can automate the identification of people by one or more of their distinct physical or behavioral characteristics—is not currently envisioned for Secure Flight. As noted in our previous work, biometric technologies, such as fingerprint recognition, are being used in other TSA screening programs.
Moreover, the current prescreening process of matching passenger names against no-fly and selectee lists, implemented by air carriers, also does not protect against identity theft or the use of fictitious identities.

TSA is aware of, and plans to address, the potential for Secure Flight to adversely affect travelers’ privacy and impact their rights. However, TSA, as part of its requirements development process, has not yet clearly identified the privacy impacts of the planned system or the full actions it plans to take to mitigate them. Nor has the agency completed its assessment of the system’s potential impact on passenger privacy in an operational environment, or defined its redress process for Secure Flight, because, in part, the operational plans and system requirements for Secure Flight have not been finalized. TSA officials stated that they are in the process of reviewing new privacy notices that will be issued in conjunction with a forthcoming rulemaking prior to proceeding with the program’s initial operating capability, and that these notices will also address certain aspects of Secure Flight’s redress process. Until TSA finalizes system requirements and notices, however, privacy protections and impacts cannot be assessed. The Privacy Act and the Fair Information Practices—a set of internationally recognized privacy principles that underlie the Privacy Act—limit the collection, use, and disclosure of personal information by federal agencies. While TSA has reiterated its commitment to meet the requirements of the Privacy Act and the Fair Information Practices, it is not yet evident how this will be accomplished. To begin with, TSA has not decided what data attributes from the PNR it plans to collect, or how such data will be provided by airlines, through CBP, to TSA.
Further, according to TSA officials, the agency is in the process of developing but has not issued the system of records notice, which is required by the Privacy Act, or the privacy impact assessment, which is required by the E-Government Act, that would describe how TSA considered privacy in the development of the system and how it will protect passenger data once the system becomes operational. Moreover, privacy requirements were not incorporated into the Secure Flight system development process in such a way that would explain whether personal information will be collected and maintained in the system in a manner that complies with statutory requirements and TSA’s SDLC guidance. One requirement of the privacy impact assessment is that privacy be addressed in the systems development documentation. In addition, TSA’s SDLC guidance acknowledges that privacy protections should be planned for and carried out as part of the system development process. In our review of Secure Flight’s system requirements, we found that privacy concerns were broadly addressed in Secure Flight’s functional requirements, but had not been translated into specific system requirements. For example, the functional requirements stated that the Privacy Act must be considered in the development of the system, but the system requirements documents do not reflect how privacy protections will be supported by the system. Rather, system requirements documents state that privacy requirements are “yet to be finalized.” TSA’s Privacy Officer stated that she has been collaborating with the system development team, but this is not evident in the documents we reviewed. Without taking steps to ensure that privacy protections are built into the system requirements, TSA cannot be assured that it will be in compliance with the Privacy Act once operational, and it runs the risk of repeating problems it experienced last spring. 
We reported in July 2005 that TSA’s initially issued privacy notices for the Secure Flight data-processing tests did not meet Privacy Act requirements because personal information was used in testing in ways that the agency had not disclosed to the public. We explained that in its fall 2004 notices, TSA had informed the public of its plans to use personal information during Secure Flight testing, including the use of commercial data in a limited manner. However, these initial notices did not fully describe how personal information would be collected, used, and stored for commercial data testing as it was carried out. As a result, individuals were not fully informed that their personal information was being collected and used, nor did they have the opportunity to comment on this or become informed on how they might exercise their rights of access to their information. Although TSA did not fully disclose its use of personal information prior to beginning Secure Flight commercial data testing, the agency issued revised privacy notices in June 2005 to more fully disclose the nature of the commercial tests and address the issues we identified. As we reported in March 2005, until TSA fully defines its operational plans for Secure Flight and addresses international privacy concerns, it will remain difficult to determine whether the planned system will offer reasonable privacy protections to passengers who are subject to prescreening or mitigate potential impacts on passengers’ privacy. At that time, we recommended that TSA finalize privacy policies and issue associated documentation prior to Secure Flight achieving initial operating capability. TSA acknowledged that it needs to publish new privacy notices to cover the collection, use, and storage of personal data for Secure Flight’s initial and full operating capability, before beginning operational testing. 
TSA officials stated that these privacy notices are currently being reviewed by TSA and DHS and will be released in conjunction with the forthcoming rulemaking. Congress mandates that Secure Flight include a process whereby aviation passengers determined to pose a threat to aviation security may appeal that determination and correct erroneous information contained within the prescreening system. TSA currently has a process in place that allows passengers who experience delays, under the current process run by air carriers, to submit a passenger identity verification form to TSA and request that the agency place their names on a cleared list. If, upon review, TSA determines that the passenger’s identity is distinct from the person on a watch list, TSA will add the passenger’s name to its cleared list, and will forward the updated list to the air carriers. TSA will also notify the passenger of his or her cleared status and explain that in the future the passenger may still experience delays. Recently, TSA has automated the cleared list process, enabling the agency to further mitigate inconvenience to travelers on the cleared list. The Intelligence Reform and Terrorism Prevention Act, enacted in December 2004, directs TSA to include certain elements in its Secure Flight redress policy. Specifically, it requires the establishment of a timely and fair process for individuals identified as a threat to appeal the determination to TSA and correct any erroneous information. It further requires that TSA establish a method for maintaining a record of air passengers who have been misidentified and have corrected erroneous information. To prevent repeated delays of misidentified passengers, this record must contain information determined by TSA to authenticate the identity of such a passenger. 
In January 2006, TSA officials stated that no final decisions have been made regarding how TSA will address the relevant requirements for redress found in the Intelligence Reform and Terrorism Prevention Act. However, OTSR officials stated that a cleared list will be part of the process. The June 2005 concept of operations describes a process where individuals who are frequently misidentified as being on the TSDB and TSA selectee list can request to be placed on a list of individuals who have been cleared. In our March 2005 report, we recommended that TSA finalize its Secure Flight redress policies and procedures prior to achieving its initial operating capability. Information concerning aspects of the redress process will be published before operational tests or full implementation of the Secure Flight process, and will be contained within the privacy notices that TSA officials stated will be released in conjunction with the forthcoming rulemaking. Moving forward, TSA has assigned a manager to serve as liaison with DHS on privacy and redress issues. TSA has continued its development and testing of Secure Flight, but has made limited progress in addressing longstanding issues related to system development and testing, program management, and privacy and redress protections. To make and demonstrate progress on any large-scale information technology program, such as Secure Flight, an agency must first adequately define what program capabilities, such as requirements related to performance, security, privacy, and data content and accuracy, are to be provided. These requirements can then in turn be used to produce reliable estimates of what these capabilities will cost, when they will be delivered, and what mission value or benefits will accrue as a result. 
For Secure Flight, well-defined requirements would provide a guide for developing the system and a baseline to test the developed system to ensure that it delivers necessary capabilities, and would help to ensure that key program areas—such as security, system connectivity, and privacy and redress protections—are appropriately managed. When we reported on Secure Flight in March 2005, TSA had committed to take action on our recommendations to manage the risks associated with developing and implementing Secure Flight, including finalizing the concept of operations, system requirements, and test plans; completing formal agreements with CBP and air carriers to obtain passenger data; developing life cycle cost estimates and a comprehensive set of critical performance measures; issuing new privacy notices; and putting a redress process in place. Over the past 11 months, TSA has made some progress in all of these areas, including conducting further testing of factors that could influence system effectiveness and collaborating with key stakeholders. However, TSA has not completed any of the actions it had scheduled to accomplish. In particular, TSA has not yet developed complete system requirements or conducted important system testing (including stress testing), fully established security measures, made key decisions that will determine system effectiveness, developed a program management plan and a schedule for accomplishing program goals, or published updated privacy and redress notices. Taken as a whole, this lack of progress indicates that the program has not been effectively managed and is at risk of failure. While we recognize that TSA faces program uncertainties that can directly impact Secure Flight’s development and progress, uncertainty is a component of most programs, and should not be used as a reason for not defining requirements and developing plans and cost estimates to manage risk. 
We believe that Secure Flight, like all programs, can utilize best practices to develop such plans to manage program uncertainties. To its credit, TSA has recently taken actions that recognize the need to instill more rigor and discipline into the development and management of Secure Flight, including hiring a program manager with information systems program management credentials. We also support TSA’s efforts to rebaseline the program, including defining system requirements and finalizing a program management plan with schedules and cost estimates, before proceeding with program development. In fact, proceeding with operational testing and completing other key program activities should not be pursued until TSA puts in place a more disciplined life cycle process and defines system requirements. In the absence of this and other program information, such as requirements, capabilities, and benefits, further investment in this program would be difficult to justify. We are also encouraged that DHS’s IRB—the department’s executive decision-making authority—has scheduled a review of Secure Flight and other people-screening programs. Given the potential duplication with CBP’s new initiatives for international prescreening, DHS, TSA, and CBP need to assess alternative system solutions that should be factored into Secure Flight’s rebaselined program and be the basis for IRB decisions regarding Secure Flight’s future. Notwithstanding these efforts, however, much work remains to be accomplished before Secure Flight is positioned to be properly executed so that informed and prudent investment decisions can be made. Mr. Chairman, this concludes my prepared statement. I will be pleased to respond to any questions that you or other members of the committee have at the appropriate time. For further information about this testimony, please contact Cathleen Berrick, at 202-512-3404 or at berrickc@gao.gov, or Randolph C. Hite at 202-512-6256 or at hiter@gao.gov. 
Other key contributors to this statement were David Alexander, Amy Bernstein, Mona Nichols Blake, John de Ferrari, Christine Fossett, Brent Helt, Richard Hung, Thomas Lombardi, C. James Madar, Matthew Mohning, David Plocher, Karl Seifert, and William Wadsworth. A system of due process exists whereby aviation passengers determined to pose a threat who are either delayed or prohibited from boarding their scheduled flights by TSA may appeal such decisions and correct erroneous information contained in CAPPS II or Secure Flight or other follow-on/successor programs. The underlying error rate of the government and private databases that will be used to both establish identity and assign a risk level to a passenger will not produce a large number of false positives that will result in a significant number of passengers being treated mistakenly or security resources being diverted. TSA has stress-tested and demonstrated the efficacy and accuracy of all search technologies in CAPPS II or Secure Flight or other follow-on/successor programs and has demonstrated that CAPPS II or Secure Flight or other follow-on/successor programs can make an accurate predictive assessment of those passengers who may constitute a threat to aviation. The Secretary of Homeland Security has established an internal oversight board to monitor the manner in which CAPPS II or Secure Flight or other follow-on/successor programs are being developed and prepared. TSA has built in sufficient operational safeguards to reduce the opportunities for abuse. Substantial security measures are in place to protect CAPPS II or Secure Flight or other follow-on/successor programs from unauthorized access by hackers or other intruders. TSA has adopted policies establishing effective oversight of the use and operation of the system. There are no specific privacy concerns with the technological architecture of the system. 
TSA has, in accordance with the requirements of section 44903(j)(2)(B) of title 49, United States Code, modified CAPPS II or Secure Flight or other follow-on/successor programs with respect to intrastate transportation to accommodate states with unique air transportation needs and passengers who might otherwise regularly trigger primary selectee status. Appropriate life-cycle cost estimates, and expenditure and program plans exist. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | After the events of September 11, 2001, Congress created the Transportation Security Administration (TSA) and directed it to assume the function of passenger prescreening--or the matching of passenger information against terrorist watch lists to identify persons who should undergo additional security scrutiny--for domestic flights, which is currently performed by the air carriers. To do so, TSA is developing Secure Flight. This testimony covers TSA's progress and challenges in (1) developing, managing, and overseeing Secure Flight; (2) coordinating with key stakeholders critical to program operations; (3) addressing key factors that will impact system effectiveness; and (4) minimizing impacts on passenger privacy and protecting passenger rights. This testimony includes information on areas of congressional interest that GAO has previously reported on. TSA has made some progress in developing and testing the Secure Flight program. However, TSA has not followed a disciplined life cycle approach to manage systems development, or fully defined system requirements. Rather, TSA has followed a rapid development method intended to develop the program quickly. 
This process has been ad hoc, resulting in project activities being conducted out of sequence, requirements not being fully defined, and documentation containing contradictory information or omissions. Further, while TSA has taken steps to implement an information security management program for protecting information and assets, its efforts are incomplete. Finally, TSA is proceeding to develop Secure Flight without a program management plan containing program schedule and cost estimates. Oversight reviews of the program have also raised questions about program management. Without following a more rigorous and disciplined life cycle process, including defining system requirements, the Secure Flight program is at serious risk of not meeting program goals. Over the past year, TSA has made some progress in managing risks associated with developing Secure Flight, and has recently taken actions that recognize the need to instill more rigor and discipline into the development process. TSA has also taken steps to collaborate with Secure Flight stakeholders whose participation is essential to ensuring that passenger and terrorist watch list data are collected and transmitted to support Secure Flight. However, key program stakeholders--including U.S. Customs and Border Protection, the Terrorist Screening Center, and air carriers--stated that they need more definitive information about system requirements from TSA to plan for their support of the program. In addition, several activities that will affect Secure Flight's effectiveness are under way, or have not yet been decided. For example, TSA conducted name-matching tests, which compared passenger and terrorist screening database data, to evaluate the ability of the system to function. However, TSA has not yet made key policy decisions that could significantly impact program operations, including what passenger data it will require air carriers to provide and the name-matching technologies it will use. 
Further, Secure Flight's system development documentation does not fully explain how passenger privacy protections are to be met, and TSA has not issued the privacy notices that describe how it will protect passenger data once Secure Flight becomes operational. As a result, it is not possible to assess how TSA is addressing privacy concerns. TSA is also determining how it will provide for redress, as mandated by Congress, to provide aviation passengers with a process to appeal determinations made by the program and correct erroneous information contained within the prescreening process. However, TSA has not finalized its redress policies. |
DC Courts’ records indicated that total obligations in fiscal years 1996, 1997, and 1998 were $115.4, $119, and $126.3 million, respectively. Fiscal year 1998 obligations reflect our adjustments, as discussed later, and are not comparable to the prior years’ obligations. This is primarily due to the changes resulting from the Revitalization Act of 1997. For example, DC Courts’ non-judicial employees received federal benefits that increased DC Courts’ obligations for fiscal year 1998. In addition, the adult probation function was transferred from DC Courts to a new entity, the Court Services and Offender Supervision Agency for the District of Columbia (COSA), in fiscal year 1998. DC Courts also provided its non-judicial employees a 7-percent pay raise and assumed responsibility for the judges’ pension costs as part of its fiscal year 1998 appropriation for court operations. Prior to the decision to transfer the adult probation function to a new entity, DC Courts had requested $123.5 million to fund its fiscal year 1998 operations. When DC Courts received $108 million in its fiscal year 1998 appropriation, it no longer had operational responsibility for the adult probation function, but continued to pay salaries and related costs on behalf of the COSA Trustee. In March 1998, the COSA Trustee took over the payments for the operations and subsequently reimbursed DC Courts $7.8 million for the costs DC Courts paid on the COSA Trustee’s behalf. These costs and the related reimbursements were included in DC Courts’ fiscal year 1998 obligations and available funds. Upon receipt of its fiscal year 1998 appropriation, DC Courts was responsible for developing a spending plan based on an appropriation that was about $15.5 million less than it requested. DC Courts did not develop a plan to ensure that its obligations did not exceed available resources. It obligated throughout the year based on its expectation of receiving additional funds. 
While DC Courts received an additional $1.7 million in appropriated funds for the fiscal year, it did not receive all of the funding it anticipated. DC Courts also received $12.1 million in grants, interest, and reimbursements, including the $7.8 million from the COSA Trustee, during the fiscal year. However, letters between DC Courts and the Office of Management and Budget (OMB) during fiscal year 1998 reflect DC Courts officials’ expectations of receiving additional resources and OMB’s concern that if DC Courts did not lower its rate of spending, its obligations would exceed available funds. For example, in an April 1998 letter, OMB advised DC Courts that it was incurring obligations at a rate that would necessitate a deficiency or supplemental appropriation. For their part, DC Courts officials continued to seek additional funds during their discussions with the COSA Trustee, Department of Justice, and OMB. By the end of the fiscal year, DC Courts’ records showed that obligations exceeded available resources by about $350,000. Specifically, its records showed obligations of almost $122.2 million and funds received of about $121.8 million. However, as I will now discuss, we found that adjustments needed to be made to these amounts. DC Courts deferred more than $4.1 million of court-appointed attorney payments that were eventually paid with fiscal year 1999 funds, but did not record these amounts as fiscal year 1998 obligations. While DC Courts officials had the authority to make these payments with fiscal year 1999 funds, this did not make the deferred payments fiscal year 1999 obligations. The vouchers were approved by the presiding judges or hearing commissioners in fiscal year 1998, and the obligations should have been recorded in fiscal year 1998. Accordingly, we added this amount to DC Courts’ reported fiscal year 1998 obligations. 
DC Courts treated interest earned primarily from its quarterly apportionments of its appropriation as available budgetary resources for court operations. However, DC Courts did not have authority to spend this interest. For this reason, we have reduced the amount that DC Courts reported as available resources for fiscal year 1998 by $773,000. As adjusted, DC Courts’ recorded obligations and available funding for fiscal year 1998 would be $126.3 and $121 million, respectively, resulting in a potential over-obligation of more than $5 million. The Anti-Deficiency Act prohibits federal and DC government officials from making expenditures or obligations in excess of amounts available in an appropriation or fund unless otherwise authorized by law. The Anti-Deficiency Act requires the head of an agency to report immediately any such violation to the President and the Congress, including all relevant facts and a statement of actions taken. OMB Circular A-34, Instructions on Budget Execution, provides additional guidance on information that the agency is to include in its report to the President. OMB instructs agencies to include the primary reason or cause for the over-obligation, any extenuating circumstances, the adequacy of the system of administrative control of funds, any changes necessary to ensure compliance with the Anti-Deficiency Act, and steps taken to prevent a recurrence of the same type of violation. DC Courts officials told us that they do not believe that a violation of the Anti-Deficiency Act occurred. In essence, DC Courts officials assert that the authority Congress provided in the fiscal year 1999 Appropriation Act to use fiscal year 1999 funds for deferred attorney payments constitutes an exception to the Anti-Deficiency Act. DC Courts officials further assert that the exception is available whenever they have obligations in excess of their budgetary resources. We disagree with this position. 
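The two adjustments described above reduce to simple arithmetic. The following sketch reproduces the calculation; all figures (in millions of dollars) come from the testimony, and the variable names are illustrative only:

```python
# Adjustment arithmetic for DC Courts' fiscal year 1998 figures.
# All amounts are in millions of dollars, rounded as reported in the testimony.
reported_obligations = 122.2      # obligations per DC Courts' records
reported_funds = 121.8            # available funds per DC Courts' records

deferred_attorney_payments = 4.1  # FY 1998 attorney vouchers paid with FY 1999 funds
unspendable_interest = 0.773      # interest DC Courts lacked authority to spend

# Deferred vouchers were approved in FY 1998, so they count as FY 1998 obligations;
# the interest was not a lawful budgetary resource, so it is removed from funds.
adjusted_obligations = reported_obligations + deferred_attorney_payments
adjusted_funds = reported_funds - unspendable_interest

over_obligation = adjusted_obligations - adjusted_funds
print(f"Adjusted obligations: ${adjusted_obligations:.1f} million")
print(f"Adjusted available funds: ${adjusted_funds:.1f} million")
print(f"Potential over-obligation: ${over_obligation:.1f} million")
```

The result, roughly $5.3 million, is consistent with the testimony's "potential over-obligation of more than $5 million."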
The fiscal year 1999 Appropriation Act was enacted after fiscal year 1998 ended. The authority cited by DC Courts only authorizes it to use fiscal year 1999 appropriations to pay deferred amounts to court-appointed attorneys, but does not excuse DC Courts from managing its activities within the appropriation level Congress provided or authorize obligations in excess of available budgetary resources. Accordingly, the critical issue for applying the Anti-Deficiency Act in this case is whether the over-obligations were entirely attributable to the mandatory obligations for court-appointed attorneys and were, therefore, authorized by law. We conclude that they were not, primarily because (1) fiscal year 1998 obligations for court-appointed attorneys were similar to the prior fiscal year and the estimated amount for fiscal year 1998; (2) DC Courts did not base its spending during most of the fiscal year on the appropriation it received; and (3) DC Courts’ records indicated that a discretionary pay raise of about $2.8 million was given to its non-judicial employees during fiscal year 1998. In addition, DC Courts officials told us that they were authorized to retain the interest earned on quarterly apportionments of their appropriation and make it available for court operations. They noted that no statute prohibits retaining interest earned on apportionments. We disagree with this position primarily because the Revitalization Act specifically requires “that all money received by the District of Columbia Courts shall be deposited in the Treasury of the United States or the Crime Victims Fund.” Thus, DC Courts did not have statutory authority to augment its appropriation with interest earned on apportioned appropriations. Recently, DC Courts officials advised us that there were obligations of over $1 million in their fiscal year 1998 records that needed to be de-obligated. 
DC Courts officials stated that these included amounts that the District should not have recorded as obligations and amounts for services that were no longer anticipated. We are currently reviewing these proposed de-obligations. It will be important that DC Courts continue reviewing its records and do all required investigating and reporting under the Anti-Deficiency Act. Throughout fiscal year 1998, it was clear that unless DC Courts modified its spending or received additional funds, it was facing a shortfall. By the third quarter when DC Courts had not received the additional funds it anticipated, there were limited options available for addressing the projected shortfall. DC Courts officials considered furloughing employees and closing the courts for a period during the summer, as well as deferring court-appointed attorneys’ and expert service providers’ payments. In May 1998, OMB officials advised DC Courts to reduce non-personnel costs instead of furloughing employees or closing the courts to avoid an Anti-Deficiency Act violation. DC Courts made the decision on July 24, 1998, to defer payments for court-appointed attorneys for the remainder of the fiscal year, and then used fiscal year 1999 appropriations to pay those amounts. DC Courts had budgeted $31.6 million for such payments in fiscal year 1998, an amount that was similar to the previous fiscal year, and as of July 1998, $25.8 million had been expended on court-appointed attorney payments. The Congress authorized use of the DC Courts’ fiscal year 1999 appropriation to fund these deferred payments. However, this did not change the payments from fiscal year 1998 obligations to fiscal year 1999 obligations. The presiding judges or hearing commissioners approved the vouchers in fiscal year 1998 and the obligations should have been recorded in fiscal year 1998. 
Now I would like to discuss the payments that were made to court-appointed attorneys during fiscal year 1998 in terms of the process for making such payments, and whether they were made promptly. Your concern was that court-appointed attorneys were being paid late or in incorrect amounts and that vouchers were sometimes being lost. We found that DC Courts processed vouchers for court-appointed attorneys in accordance with its policies and procedures. However, its procedures did not include time frames for making payments to court-appointed attorneys. Our analysis of DC Superior Court’s fiscal year 1998 paid voucher data through July 1998 showed that 94 percent of the vouchers for court-appointed attorneys and expert service providers were paid within 30 days of the presiding judge’s or hearing commissioner’s approval and 83 percent of these vouchers were paid within 60 days of the date submitted. You were also interested in the incidence of voucher amounts being reduced at the time they are approved by the presiding judges or hearing commissioners. Our analysis of fiscal year 1998 paid voucher data showed that judges or hearing commissioners reduced voucher amounts in 9 percent of the cases, of which more than half involved reductions of $100 or less. DC Courts did not have procedures covering how judges or hearing commissioners were to report to the attorney or expert service provider their decisions to reduce voucher amounts claimed. However, DC Courts officials stated that this information was available to attorneys who requested it. Regarding lost or missing vouchers, we found that there were no procedures for retaining data on the number of vouchers reported as missing or the disposition of such vouchers. DC Courts officials stated that such data were not maintained. 
I would now like to discuss a matter that did not affect DC Courts’ use of its fiscal year 1998 appropriation for court operations, but that will need to be addressed if DC Courts is to have the requisite authority to make payments out of its Crime Victims Fund. A District law established the Crime Victims Compensation Program under DC Courts jurisdiction prior to the enactment of the Revitalization Act. The Revitalization Act supports the authority of DC Courts to deposit fines, fees, and other money to the credit of the Crime Victims Fund under the District law. The District law provides that payments of up to $25,000 from the Fund can be made to crime victims for shelter, burial costs, or medical expenses. DC Courts’ records indicated that over $1.5 million in such payments were made during fiscal year 1998. However, there is nothing in the language of the District’s fiscal years 1998 or 1999 Appropriation Acts that appropriates amounts from the Crime Victims Compensation Fund, nor have we identified any other federal law authorizing payments from the Fund. Accordingly, we conclude that DC Courts did not have the requisite legislative authority to make payments from the Fund. This is a matter for the Congress and DC Courts to address. Mr. Chairman, this concludes my statement. We will be separately reporting to you on these and other issues that you asked us to review and will include recommendations for addressing the matters discussed in this testimony. I will be happy to answer questions from you or other members of the Subcommittee. | Pursuant to a congressional request, GAO discussed the issues related to the District of Columbia (DC) Courts' financial operations for fiscal year (FY) 1998, focusing on: (1) identifying DC Courts' total obligations for fiscal years 1996, 1997, and 1998; (2) whether DC Courts had a spending plan for FY 1998, and whether it obligated funds consistent with available resources; (3) why payments to court-appointed attorneys were deferred between July and September 1998; and (4) whether DC Courts processed payments to court-appointed attorneys in accordance with policies and procedures. 
GAO noted that: (1) DC Courts experienced difficulties in planning and budgeting during this transition year; (2) DC Courts' records showed that it did not operate within its available resources, potentially in violation of the Anti-Deficiency Act; (3) GAO also identified a legal issue regarding the Crime Victims Compensation Program; (4) DC Courts' records indicated that total obligations in fiscal years 1996, 1997, and 1998 were $115.4, $119, and $126.3 million, respectively; (5) FY 1998 obligations reflect GAO's adjustments, and are not comparable to the prior years' obligations; (6) upon receipt of its FY 1998 appropriations, DC Courts was responsible for developing a spending plan based on an appropriation that was about $15.5 million less than it requested as a result of funding changes under the Revitalization Act and the FY 1998 appropriation act; (7) DC Courts did not develop such a plan or properly monitor spending to ensure that its obligations did not exceed available resources; (8) it obligated throughout the year based on its expectation of receiving additional funds; (9) by the end of the fiscal year, DC Courts' records showed obligations of almost $122.2 million and funds received of about $121.8 million; (10) however, GAO found that adjustments needed to be made to these amounts; (11) as adjusted, DC Courts' recorded obligations and available funding for FY 1998 would be $126.3 and $121 million, respectively; (12) thus, DC Courts potentially over-obligated available funds by more than $5 million; (13) the Anti-Deficiency Act prohibits federal and DC government officials from making expenditures or obligations in excess of amounts available in an appropriation or fund unless otherwise authorized by law; (14) to avoid an Anti-Deficiency Act violation, the DC Courts made the decision to defer payments for court-appointed attorneys for the remainder of the fiscal year, and then used FY 1999 appropriations to pay those amounts; (15) however, since the 
vouchers were approved by the presiding judges or hearing commissioners in FY 1998, the obligations should have been recorded in FY 1998; (16) DC Courts processed vouchers for court-appointed attorneys in accordance with its policies and procedures; and (17) however, its procedures did not include timeframes for making payments to court-appointed attorneys.
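The over-obligation figure in the summary follows directly from the adjusted amounts GAO reports. As a minimal arithmetic check (amounts in millions of dollars, as reported):

```python
# DC Courts, fiscal year 1998 (millions of dollars), as reported.
recorded_obligations = 122.2   # obligations shown in DC Courts' records
recorded_funds = 121.8         # funds received per DC Courts' records

# GAO's adjustments raise obligations and lower available funding.
adjusted_obligations = 126.3
adjusted_funds = 121.0

over_obligation = adjusted_obligations - adjusted_funds
print(f"Potential over-obligation: ${over_obligation:.1f} million")

# The Anti-Deficiency Act concern arises because adjusted obligations
# exceed the amounts available in the appropriation or fund.
assert over_obligation > 5.0
```

This reproduces the summary's conclusion that DC Courts potentially over-obligated available funds by more than $5 million.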
The CFO Act requires that an agency Chief Financial Officer (CFO) oversee all financial management activities relating to the programs and operations of the agency. Some key CFO responsibilities are: developing and maintaining integrated accounting and financial management systems; directing, managing, and providing policy guidance and oversight of all agency financial management personnel, activities, and operations; approving and managing financial management system design and enhancement projects; developing budgets for financial management operations; and overseeing the recruitment, selection, and training of personnel to carry out agency financial management functions. One of the most important positions under the CFO is the comptroller. The comptroller is the CFO’s technical expert who oversees and manages the day-to-day operations. As such, the comptroller in any agency, including the military services, is a key financial manager. As of October 1, 1996, the Navy had 100 military officers filling key comptroller jobs. These jobs have responsibilities involving a significant range of Navy resources, and are designated to be staffed by officers who range in rank from captain to lieutenant. For example, the comptroller of the Pacific Fleet, billeted for a Navy captain, is responsible for financial management and financial reporting of an annual budget of about $5 billion, comparable in size to a Fortune 500 corporation; whereas a comptroller at a small installation, billeted for a lieutenant, manages an annual budget of about $5 million. The duties of these comptroller billets are described as follows: “Directs formulation, justification and administration of fiscal and budgetary management policies, plans and procedures. Determines budget and fiscal control policies. Coordinates and approves allocation of funds to programs and organizational units. Develops reports on status of appropriations. Provides required data on utilization of labor, material, and commercial services. Prescribes required methods for budget estimation, fiscal administration, and accounting.
Exercises internal control over these systems through administrative and internal activities.” Table 1 shows the 100 comptroller jobs by rank. In November 1995, the Joint Financial Management Improvement Program (JFMIP) published Framework for Core Competencies for Financial Management Personnel in the Federal Government, designed to highlight the knowledge, skills, and abilities that accountants, budget analysts, and financial managers in the federal government should possess or develop to perform their functions effectively. JFMIP stated that federal financial managers need to be well equipped to contribute to financial management activities such as: the preparation, analysis, and interpretation of consolidated financial statements; the formulation/execution of budgets under increasingly constrained resource caps; and the development and implementation of complex financial systems. In defining core competencies needed to effectively perform as a senior accountant and financial manager, which includes positions such as military service comptrollers, JFMIP emphasizes the need for a broad range of knowledge, skills, and abilities, including: accounting education with updated knowledge of accounting principles and federal accounting concepts; knowledge of agency financial statements, internal control environment, and agency business practices; strategic vision for implementation of the Government Performance and Results Act (GPRA) and formulation of budgets; resource and program management skills, with knowledge of appropriation structure and agency management control systems; and human resource skills to effectively manage a workforce. These core competencies suggest that individuals filling key comptroller positions in the federal government need to come to their jobs with a broad range of knowledge, skills, and abilities, including a strong foundation of experience and education in accounting.
Accordingly, the Office of Personnel Management (OPM) has required that individuals in civilian accounting positions in the federal government, which are in the GS-510 series, meet a minimum qualification standard of 24 semester hours of college-level accounting courses plus an appropriate number of years of experience for the specific position. We recognize that there are always individuals who may lack the educational background desired but who have developed the technical competencies needed through actual experience. However, formal education and technical training are crucial factors in maintaining a professional workforce whether an individual is a warfare officer or a financial manager. The financial management core competencies needed by individuals in comptroller positions require both formal education in accounting and business, and experience in financial management. The Navy has recognized the need to upgrade the knowledge and skills of its individuals in financial management positions. However, unlike the Air Force and the Army, the Navy has no specific career path in financial management aimed at developing needed core competencies for officers in key comptroller positions. The Assistant Secretary of the Navy for Financial Management and Comptroller (ASN/FM&C) has acknowledged: “Serious problems exist in many facets of DON financial management...and (we) have responsive improvement plans well under way... Recent changes in law and policy have made this a more demanding task and require staffs to acquire new knowledge and skills.” We agree with the ASN/FM&C that financial management staff need to acquire new knowledge and skills. One of the more critical positions in a strong financial management function is the comptroller. However, we found that the Navy’s present staffing practices for military officers fail to provide a career path for the critically important comptroller function.
Under present practices, Navy officers filling fiscal administration jobs, including comptrollers, devote most of their careers to either operational command positions or logistics functions. About half of the key comptroller positions are staffed by line officers and half by officers in the supply corps. Line officers are generally individuals who are eligible to command at sea, and whose primary occupational specialty is surface warfare, aviation, or submarines. Line officers may also include individuals not eligible to command who serve in various operational staff positions. The Navy considers supply corps officers to be its business managers; they serve in a wide variety of logistics and financial management positions. By contrast, the Air Force and the Army offer a career path in comptrollership. Under the Air Force’s career program in financial management and comptrollership, many Air Force officers devote their entire careers to financial management. The Army has designed its own unique approach to developing a cadre of financial management officers. All Army officers are required to spend at least the first 5 years of their careers in positions in either comptrollership or one of the operational branches of the Army, such as infantry, artillery, or armor. Army officers can elect to serve in comptrollership positions under one of two programs. In the single track program, an officer can stay exclusively in financial management as a specialty. In the dual track program, an officer can rotate between financial management jobs and command positions in the operational branch. To illustrate, we judgmentally selected and reviewed the career experiences of a Navy captain, an Air Force colonel, and an Army colonel, each currently serving as the comptroller of a major command.
Each of these comptrollers carries significant responsibility for the financial management and financial reporting of activities with annual budgets ranging from around $1 billion to $5 billion. The profiles show that the Air Force and Army comptrollers have significant career experiences that are important in developing core competencies needed by a military comptroller. However, the Navy officer’s profile illustrates a focus on a career as a Navy combat operations officer, rather than on developing competencies needed as a military comptroller. The Navy captain graduated from a major university with a degree in business. He devoted his first 7 years to junior command positions as a warfare officer, then went to graduate school and obtained a masters degree in business. In the following 14 years, he served in various assignments at sea and in training as a warfare officer, and spent almost 2 years as a plans and policies director for the Joint Chiefs of Staff. He was subsequently appointed commanding officer of a naval station and, 2 years later, became commanding officer of an amphibious group in the Pacific Fleet. After a 26-year career as a warfare officer, this captain was assigned as comptroller of a Navy fleet. The Air Force colonel graduated from a major university with a degree in finance. He spent the first 13 years primarily as a budget officer at two bases and an air field, at the U.S. Air Forces Europe, and at the Office of the Air Force Comptroller at the Pentagon. He then went to graduate school and obtained a masters degree in business administration. For the next 7 years, he served in various positions, such as base comptroller and director of budget for a major command. He then spent 2 years as an executive officer and division chief in the Office of Assistant Secretary of the Air Force for Financial Management and Comptroller (ASFM). Then, for approximately 1 year, he was Director of Accounting and Finance for a major command.
Then, he returned to the Pentagon as Director of Budget and Appropriations, ASFM, for about 3 years. After a 27-year career in financial management, he was appointed comptroller of a major command. The Army colonel graduated from a major university with a degree in finance. He spent the first 5 years as a tank platoon leader and a special services officer, then entered the single track comptrollership series and served as an installation comptroller (resource management officer) and a finance instructor over the next 7 years. During that 7-year period, he obtained a masters degree in business administration with an emphasis in comptrollership. Over the next 5 years, he served as military assistant to the Director of the Office of Management and Budget, White House. He was then assigned for 4 years to a comptroller billet at the Office of the Joint Chiefs of Staff, Pentagon. He then served as the Deputy Chief of Staff for Resource Management for an Army installation. After a 24-year Army career, with 19 years in financial management, he became the comptroller of U.S. Army, Pacific. We also looked at an Army colonel who was a comptroller of a $4 billion activity. This individual was in the Army’s dual track program. Out of a 25-year career, this officer spent only 6 years in financial management positions. While most Army officers are in the dual track program, we have not reviewed the Army’s comptroller billets to determine whether this dual track colonel is typical. Also, the single track officer may not be representative of Army comptrollers either, but he demonstrates the type of experience one would expect of a comptroller of a major activity. The Navy has staffed its military comptroller positions with individuals who, on average, lack the depth of financial management experience and the accounting education needed for the financial management environment of the 1990s.
Line officers, who fill most of the senior-level comptroller positions at the captain and commander ranks, have spent almost their entire careers in operational assignments as surface warfare officers, aviators, or submariners. Supply corps officers fill the remaining comptroller positions, and, although they have stronger business-related educational backgrounds and more exposure to financial management activities, most of their careers have been devoted to Navy logistics. Of the 100 key comptroller positions filled by Navy officers in October 1996, 53 were occupied by line officers whose primary career fields were in Navy operational commands, including surface warfare officers, aviators, and submariners. For these officers, a comptroller position offers a temporary shore duty between commands at sea. While these line officers are typically highly educated individuals and have considerable operational experience, they lack both the financial management experience and accounting education needed by a comptroller. These 53 officers present the following profile: They filled mostly senior-level comptroller positions—14 were captains and 25 were commanders. They averaged 17.8 years of commissioned service in the Navy, but only 3.4 years in financial management jobs, including their tenure in their current comptroller position. Only 19 of the 53 (36 percent) majored in accounting or other business-related curriculum as undergraduate students. Thirty-two of the 53 officers (60 percent) obtained masters degrees in a business-related major, while 14 of the remaining 21 officers (26 percent of the 53) had neither undergraduate nor graduate education in any business-related field. Our review of a sample of line officers’ college transcripts showed that they averaged about 12 semester hours of accounting courses, mostly acquired in graduate studies in financial management.
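The percentages in the line-officer profile above follow directly from the underlying counts; a minimal check:

```python
line_officers = 53  # line officers in comptroller positions, Oct. 1, 1996

undergrad_business = 19     # majored in accounting or a business-related field
masters_business = 32       # obtained a business-related masters degree
no_business_education = 14  # neither undergraduate nor graduate business education

for label, count in [("undergraduate business major", undergrad_business),
                     ("business-related masters", masters_business),
                     ("no business education", no_business_education)]:
    pct = round(100 * count / line_officers)
    print(f"{label}: {count} of {line_officers} ({pct} percent)")
```

Rounding 19/53, 32/53, and 14/53 reproduces the 36, 60, and 26 percent figures reported in the profile.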
Appendix II summarizes the education and experience of the 53 line officers filling comptroller positions in October 1996. Of the 53 line officers in comptroller positions, 43 earned masters degrees, 22 from the Naval Postgraduate School (NPS) in Monterey, California. Based on Navy data, officers selected for NPS spend 18 months in the program at a cost of about $150,000, including salary and benefits. Of the 43 officers with masters degrees, 32 earned their masters in business from either NPS or other participating universities. The NPS degree program in financial management includes approximately 11 semester hours of accounting and has the objective of preparing Navy officers for assignments to positions in budgeting, accounting, business and financial management, and internal control and auditing. However, after graduating with their masters degrees in business, many line officers do not rotate directly to a financial management position where they could immediately apply their education. Navy data on officers serving in comptroller positions show that line officers selected for financial management positions spend only a small percentage of their career in finance. Navy data on a broader universe of all officers who obtain a masters degree in financial management at NPS show that 49 percent of line officers do not use their training for at least 6 years after graduation and 40 percent never use their education in a Navy financial management job. Navy staffing practices are inadequate to ensure that the investment made in postgraduate financial management training is effectively utilized in financial management positions. The remaining 47 of the 100 Navy officers filling comptroller positions on October 1, 1996, were supply corps officers. 
The Navy defines the mission of the supply corps as providing expertise to the Navy and other Department of Defense (DOD) operations in logistics, acquisition, and financial management, and refers to the cadre of supply officers as the Navy’s business managers. While these officers have careers with more exposure to financial management activities than line officers, many supply officers still lack the depth of experience in fiscal administration and the accounting education needed for comptrollership in today’s complex financial management environment. The 47 supply officers present the following profile. They filled both senior- and mid-level comptroller positions—27 were captains or commanders and 20 were lieutenant commanders or lieutenants. They averaged 16.1 years of commissioned service in the Navy, of which 3.4 years were in fiscal-related positions and 5.7 years were in logistics positions that involved some financial management experience. Twenty of the 47 (43 percent) majored in accounting or some other business-related field in undergraduate school. Thirty-one of the 47 officers (66 percent) obtained masters degrees in business-related fields. Our analysis of transcripts for a sample of these officers showed that they averaged about 14 semester hours of accounting. Appendix III summarizes the education and experience of the 47 supply corps officers filling comptroller positions in October 1996. An officer assigned to the supply corps will usually spend his or her career in one of seven occupational groups: (1) fiscal; (2) subsistence, open mess, and bachelors quarters management; (3) transportation; (4) material distribution; (5) procurement; (6) inventory control; or (7) general. Of the seven occupational groups, six are predominantly logistics-oriented, while fiscal assignments can provide Navy officers with experience for developing core competencies needed by comptrollers. Five job series are included under the fiscal grouping.
The billet description for a general supply officer illustrates this mix of duties: “Directs supply department activities. Applies supply policies to operation of department. Determines demand in accordance with mission and standard allowance lists. Approves requisitions, balance sheets and summaries. Directs receiving, storage, inventory control, issue and salvage of material. Oversees procurement and sale of goods and services. Administers operation of general mess, including procurement, storage, issue, and inventory of provisions. Conducts disbursing activities in connection with property accountability and transfer, payroll, and personal accounts.” The duties of a general supply officer provide financial management experience to supply corps officers, as indicated by the above description of duties. Other supply officer assignments in logistics specialties also have financial management components, such as budget management. While the logistics positions provide officers with some financial management experience, it is the fiscal administration-type assignment, i.e., budget officer, accountant, or comptroller, that best addresses the core competencies needed by key financial managers. Although the Navy does not have a career path in financial management, a few supply corps officers have a career profile heavily focused on fiscal assignments. For example, one captain now serving as the comptroller of a major Navy command has 25 years in the Navy, and he has spent 10 of the past 13 years in comptroller positions. However, we believe most of the supply corps officers in comptroller positions would fall short of meeting JFMIP’s core competencies because their career paths have not been concentrated in fiscal administration. As stated earlier in this report, recent reform initiatives aimed at addressing long-standing and severe federal financial management problems, including the CFO Act and GPRA, have placed demands on comptrollers in the 1990s that are substantially greater than in the past.
To meet these demands, Navy personnel practices for key comptroller positions need improvement to ensure the development of the core competencies and experience necessary to meet today’s considerable challenges. Conversion of military financial management and other support positions to civilian status was the topic of our October 1996 report. We cited two advantages of conversion to civilian status: (1) dollar savings, because civilians are less expensive than military members of equivalent rank, and (2) greater stability of personnel, because military staff rotate frequently in and out of positions while civilians do not. Our report suggested that DOD could save as much as $95 million annually by converting positions occupied by military officers to civilian status. In that report, we identified about 9,500 administrative and support positions that civilians may be able to fill at lower cost and with greater productivity due to the civilians’ much less frequent rotations. Examples of career fields that contain positions that might be converted are information and financial management, which would include comptroller positions. DOD guidance on civilian versus military staffing of positions was written in 1954. It requires that civilians be used to staff positions wherever possible. However, the guidance also provides a high degree of flexibility to DOD by allowing positions to be designated as military essential, and therefore to be filled by an active military officer, for any of the following reasons. Required training is only available in the military. The position is needed to maintain combat readiness. The position requires a general military background for successful execution. The law requires that the position be staffed by military personnel. The position must be military in order to maintain good order and discipline or exercise authority under the Uniform Code of Military Justice.
The position is needed to ensure adequate opportunities to rotate personnel from overseas locations or sea duty to tours of duty in the continental United States. The position must be military for security reasons because the incumbent may be involved in combat, expected to use deadly force, or expected to exhibit an unquestioned response to orders. The position requires unusual duty hours that are not normally compatible with civilian employment. Since these guidelines were issued over 40 years ago, the government’s financial management environment and personnel needs have changed substantially, particularly with respect to the need for specialized positions such as comptroller. Increased demands and challenges faced by government financial managers resulting from financial management reform legislation of the 1990s warrant a closer look at staffing these key positions. To identify candidates for conversion in our October 1996 report, we developed criteria based on the above DOD directive and service implementing guidance. The criteria consisted of four questions that reflect the substance of the DOD criteria. Answering “no” to all four questions would be one approach to identifying positions that could be converted to civilian status. The questions were as follows. (1) Is the primary skill or knowledge required in the position uniquely available in the military? (2) Does the position have a mission to deploy to a theater of operations in wartime or during a contingency? (3) Does any law require that the position be staffed by a military person? (4) Is the position needed to support the normal rotation of service members deployed overseas or afloat to assignments in the continental United States? DOD’s response to our October 1996 report acknowledged the potential savings and other advantages of military-to-civilian conversions.
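The four-question screen described above amounts to a simple decision rule: a position is a conversion candidate only if every question is answered “no.” A minimal sketch, with hypothetical field and function names chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class Position:
    # Each field mirrors one of the four screening questions.
    skill_uniquely_military: bool      # (1) skill available only in the military?
    wartime_deployment_mission: bool   # (2) deploys to a theater in wartime?
    military_required_by_law: bool     # (3) law requires military staffing?
    supports_rotation_base: bool       # (4) supports overseas/sea-duty rotation?

def conversion_candidate(p: Position) -> bool:
    """A position may be converted to civilian status only if the
    answer to all four screening questions is 'no'."""
    return not (p.skill_uniquely_military
                or p.wartime_deployment_mission
                or p.military_required_by_law
                or p.supports_rotation_base)

# Example: a shore-based comptroller billet with no wartime mission.
comptroller = Position(False, False, False, False)
print(conversion_candidate(comptroller))  # True
```

The rule is deliberately conservative: a single “yes” answer keeps the position designated as military essential.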
DOD also noted impediments to placing civilians in certain positions, such as the lack of consistent funding for the hiring of civilian replacements, the ongoing civilian personnel draw-down, and military strength floors. DOD, in its response to the report’s recommendation, said the issue of military-to-civilian conversion is an important component of DOD manpower requirements determination and that the issue was being discussed in planning for the Quadrennial Defense Review (QDR). We recognize the difficulties DOD and the Navy face while operating in fiscally constrained times. However, DOD and the Navy should benefit significantly in terms of more efficient and effective operations if a strong comptroller function is established and maintained. A well-educated and experienced cadre of comptrollers, whether military or civilian, is critical to managing a large organization such as the Navy. While DOD anticipates that the QDR will concentrate on identifying methods to overcome the impediments to large-scale military-to-civilian conversions for all the military services, steps need to be taken to address the Navy’s lack of a career path for military comptrollers. As the Air Force and the Army have recognized, financial management and comptrollership is a professional career track that requires highly trained and skilled individuals. In the military combat operations environment, one would not expect an officer with only 3 to 4 years’ experience to command a ship, squadron, or fleet. Similarly, one would not expect a comptrollership, responsible for billions of dollars, to be staffed temporarily by a less than fully experienced financial manager. This would be true whether the comptroller was a military officer or a civilian. However, that in effect is the unintended consequence of the Navy’s present personnel practices with respect to assigning its military officers to comptroller positions.
Therefore, if the Navy is to be successful in meeting the objectives of the various governmentwide financial management reform initiatives, it must have a highly skilled and experienced financial management staff in place to help guide and manage its efforts. We recommend that the Secretary of Defense ensure that the following steps are taken by the Navy. Identify which key military comptroller positions can be converted to civilian status in order to gain greater continuity, technical competency, and cost savings. For those comptroller positions identified for conversion to civilian status, ensure that those positions are filled by individuals who possess both the proper education and experience needed to meet the JFMIP core competencies. For those comptroller positions that should remain as military billets, establish a career path in financial management that ensures that military officers are prepared, both in terms of education and experience, for comptrollership responsibilities. In commenting on a draft of this report, DOD generally agreed with the report’s findings. These comments are summarized below and reprinted in appendix IV. Specifically, DOD agreed that there may be key military comptroller positions that can be converted to civilian status. The Department also recognized the need to fill such positions with individuals who possess the proper education and experience, and supported the report’s message that the Navy needs to strengthen its existing training program for financial management subspecialists. However, DOD did not concur with our third recommendation on establishing a specific career path in financial management. This recommendation is aimed at ensuring that Navy military officers develop the technical competencies needed to be effective comptrollers through training and experience.
The Navy does not believe a formal career program in comptrollership is feasible because of the small number of officers in this field combined with a need for extensive experience in fleet operations. While fleet experience may help to develop a better understanding of operational issues, a comptrollership function demands a high level of financial management expertise for an individual to be effective in today’s complex environment. Further, the relative number of military comptrollers is not the issue; rather, the issue is that these officers should have the technical competencies necessary to perform in these key Navy comptroller positions. Although DOD did not concur with our recommendation, the Department acknowledged that some naval officers may have been assigned as comptrollers without a strong background in some aspects of financial management. To address this problem, DOD plans to take steps to increase the number of tours or months of experience required to become a financial management subspecialist and upgrade all comptroller billets to proven subspecialist billets. These steps should increase the amount of experience that Navy officers bring to the comptroller positions. However, the Navy needs to ensure that its comptroller positions are filled with individuals who bring a strong background of financial management experience to those positions. We are concerned that simply increasing the number of months necessary to qualify as a subspecialist or adding a tour of duty, though a positive step, will not fully achieve the desired goal. We continue to believe that a career path, similar to the Air Force or Army, is the best approach. We are also pleased that the Navy plans to enhance its training for military officers who will serve in comptroller positions.
A critical aspect of such training is that officers completing the course should be assigned to a comptroller position within a relatively short period of time so that the benefits of the training are not lost before they can be applied for the benefit of the Navy. As noted in this report, utilization of financial management training by Navy officers has been a problem in the past because many years elapsed between completion of training and an assignment to a key financial management position. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from the date of this letter. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight and other interested committees. We will also send copies to the Secretaries of Defense and the Navy and the Director of the Office of Management and Budget. Copies will be made available to others upon request. If you have any questions about this report, please contact me at (202) 512-9095. The major contributors to this report are listed in appendix V. We identified the Navy’s military comptroller billets by interviewing Bureau of Naval Personnel officials and reviewing Navy staffing policy and procedures manuals. We obtained a database from the Bureau of Naval Personnel on Navy officers who were in financial management positions. Using this database, we identified the universe of military officers in comptroller positions as of October 1, 1996. We also used this database to document the formal education and experience of these officers. We supplemented the database information by reviewing microfiche records which contained detailed career histories and college transcripts for each officer.
We interviewed officials at the Bureau of Naval Personnel and met with selected Navy comptrollers to obtain a detailed understanding of Navy staffing practices and Navy recordkeeping systems. We identified 191 military comptroller (code 1050) billets as of October 1, 1996. Further analysis showed that 91 of the 191 comptrollers were in either the Medical Service Corps or the Civil Engineering Corps. We excluded the 89 medical corps officers from our analysis because (1) medical comptrollers perform specialized duties that are closely related to the field of health care administration and (2) funding in this area represented only about 1 percent of the Navy’s budget. We also excluded the two civil corps officers to maintain a clear distinction between the line officers and supply officers who were the focus of our review. Based on the data provided by the Navy, we profiled the career experiences, in terms of education and assignment history, of the remaining 100 Navy officers filling comptroller positions. We segregated these officers for purposes of analysis into line officers and supply officers to assess whether there were any differences in educational background and financial management experiences due to career track. Further, to illustrate the possible disparities in the financial management experiences of comptrollers representing the three military services, we judgmentally selected for analysis senior officers representing the Navy, Air Force, and Army. These individuals were chosen based solely on whether the officer was the comptroller of a major command—in the $1 billion to $5 billion budget range. However, this assignment was principally focused on the analysis of the qualifications of Navy officers in key comptroller positions. As such, we did not review the profiles of all Air Force and Army officers in key comptroller positions. 
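The derivation of the 100-officer analysis population from the 191 billets can be sketched as follows; the counts come from the report, but the code and the category names are purely illustrative:

```python
# Illustrative sketch of the billet exclusions described above.
# Counts are taken from the report; the dictionary keys are hypothetical labels.
billets = {
    "medical_service_corps": 89,   # excluded: specialized health care duties
    "civil_engineering_corps": 2,  # excluded: neither line nor supply officers
    "line_and_supply": 100,        # profiled in the analysis
}

total_billets = sum(billets.values())
excluded = billets["medical_service_corps"] + billets["civil_engineering_corps"]
analyzed = total_billets - excluded

print(total_billets, excluded, analyzed)  # 191 91 100
```

This simply confirms that the 89 medical corps officers and 2 civil corps officers together account for the 91 excluded billets, leaving the 100 officers profiled.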
This review excluded any analysis of civilians in comptroller positions because we have a broader review underway that will analyze the education and experience of key financial managers throughout DOD. We conducted our work from July 1996 to March 1997 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Secretary of Defense. DOD provided us with written comments. These comments are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix IV. The following is GAO’s comment on the Department of Defense’s letter dated April 18, 1997. 1. Discussed in the “Agency Comments and Our Evaluation” section. Richard L. Harada, Senior Evaluator Karlin I. Richardson, Senior Evaluator 
Pursuant to a congressional request, GAO reviewed opportunities to improve the experience and training of key Navy comptrollers, focusing on: (1) personnel practices and the education and experience of Navy officers serving in comptroller positions; and (2) options for strengthening these practices. GAO noted that: (1) the Navy's personnel practices do not provide a career path for Navy officers to develop and maintain the core competencies needed by a comptroller; (2) by contrast, the Air Force and the Army offer a career path in comptrollership; (3) because of the Navy's approach, many officers in key comptroller positions lack the financial management experience and the accounting education needed to meet the demands of today's financial management environment; (4) slightly more than half of the Navy's key comptroller positions are filled by line officers whose primary occupation in the Navy is in surface warfare, submarines, aviation, or operational staff positions; (5) these officers averaged 17.8 years of commissioned service in the Navy, but only 3.4 of those years had been spent in any financial management position, including their current comptroller job; (6) about 60 percent of the line officers had obtained master's degrees in business-related majors, but due to Navy personnel practices, many did not utilize their financial management education until several years after graduation and generally served in a comptroller position for only one tour in their career; (7) about 26 percent of the line officers serving as comptrollers had no college degree in any business-related field; (8) supply corps officers, while more qualified from a formal education perspective than line officers for comptroller positions, generally lacked the depth of experience needed by a comptroller for the 1990s and beyond; (9) most of the supply officers held a college degree at the bachelor's or master's level in accounting or business, but few had substantial experience in Navy 
fiscal administration assignments involving such roles as budget officer, accountant, or comptroller; (10) they averaged 16.1 years of commissioned service in the Navy, of which 3.4 years were in fiscal administration and 5.7 years were in logistics positions that involved some financial management experience; and (11) in a few cases, senior supply corps officers had as much as 10 years of experience in fiscal administration.
When the Medicare program was established in 1965, it covered only health care services for the diagnosis or treatment of illness or injury. Preventive services did not fall into either of these categories and, consequently, were not covered. Since 1980, the Congress has amended Medicare law several times to add coverage for certain preventive services for different age and risk groups within the Medicare population. (See table 1.) For most of these services, Medicare requires some degree of cost-sharing by beneficiaries, although most beneficiaries have additional insurance, which may cover most, if not all, of these cost-sharing requirements. Some services, such as pneumonia and flu shots and the fecal-occult blood test for colorectal cancer, have no cost-sharing requirements. Many other preventive services exist besides those specifically covered as preventive services under Medicare, such as blood pressure screening and cholesterol screening. Although Medicare does not explicitly provide coverage for these other services, Medicare beneficiaries may receive some of them during office visits for other medical problems. Data from surveys of Medicare beneficiaries indicate that the receipt of such services is common. For example, in 1999, nearly 98 percent of seniors reported that they had had their blood pressure checked within the last 2 years, and more than 88 percent of seniors reported having their cholesterol checked within the prior 5 years. At least a portion of these services was likely ordered by physicians to diagnose the causes of medical problems, and was paid for by Medicare as such. To identify how best to increase use of preventive services needed by the Medicare population, CMS sponsors reviews of studies that examine various kinds of interventions that have been used in the past for populations age 65 and older. CMS also takes action to implement interventions in each state through its Peer Review Organization (PRO) program. 
Under this program, CMS contracts with 37 organizations that together are responsible for each state, U.S. territory, and the District of Columbia. The PRO program, which is designed to monitor and improve quality of care for Medicare beneficiaries, currently includes the goal of increasing the use of flu and pneumonia immunizations, as well as breast cancer screening, in each state. These organizations collaborate with hospitals and health care professionals, suggesting systemic changes to improve how preventive services are provided. CMS also conducts a variety of health promotion activities to educate beneficiaries about the benefits of preventive services and to encourage their use. These include the publication of brochures on certain covered services and media campaigns. Use of preventive services offered under Medicare has increased over time. Some services are used more extensively than others, and use of individual services varies by state and, to a lesser extent, by demographic characteristics such as ethnicity, income, and education. Although opportunities remain to increase the use of preventive services within Medicare, there are limits to the extent to which some beneficiaries would be expected to use certain services. Information on usage for 4 of the 10 preventive services covered under Medicare is available in the data we used—immunizations against pneumonia and flu and screening for cervical and breast cancer. This information shows that beneficiaries age 65 and older are increasing their use of all 4 services. (See table 2.) For example, 68 percent of beneficiaries received flu shots in 1999, compared with 60 percent in 1995. In 1999, although each preventive service was used by the majority of Medicare beneficiaries, fewer received multiple preventive services. 
For example, 1999 data show that while 91 percent of female Medicare beneficiaries received at least 1 preventive service, only 10 percent of these beneficiaries were screened for cervical, breast, and colon cancer, as well as immunized against flu and pneumonia. These data also show that 44 percent of male beneficiaries were immunized against both flu and pneumonia. When colorectal screening is included in this set of services, the proportion of men who had received all 3 services falls to less than 27 percent. While national rates provide an overall picture of current use, they mask substantial differences in how seniors living in different states use some services. For example, the national breast cancer screening rate for Medicare beneficiaries was 75 percent in 1999, but rates for individual states ranged from a low of 66 percent to a high of 86 percent. In table 3, we show the range over which state estimates of preventive service usage rates vary from lowest to highest for selected states. While usage rates for each service varied from state to state, the services with the highest rates in each state were generally the same. For example, in most states, screening rates for breast and cervical cancer were higher than rates for colorectal screens. Usage rates for Medicare beneficiaries also varied based on ethnicity, and on socioeconomic status, as defined by income and education. By ethnicity, the biggest differences occurred in use of immunization services. For example, 1999 data show that about 57 percent of whites and 54 percent of “other” ethnic groups were immunized against pneumonia, compared to about 37 percent of African Americans and Hispanics. Similarly, about 70 percent of whites and “other” ethnic groups received flu shots during the year compared to 49 percent of African Americans. 
The only other statistically significant difference between ethnic groups was for the fecal-occult blood test for colon cancer, for which 26 percent of whites received screenings within the past year compared to 16 percent of Hispanics and “other” ethnic groups. In general, as income and education rose, the rates at which individuals used preventive services also increased. (See table 4.) Various studies have identified a variety of factors affecting beneficiary decisions to seek preventive care, including low patient awareness of the benefits of the services as well as of the need for service. Some factors, such as those involving patient awareness of the benefits, may represent opportunities to increase the use of preventive services. For example, see the following. In a 1997 report, the Agency for Healthcare Research and Quality found that, although patients may be unaware of the risks or symptoms of colorectal cancer, they are more likely to participate in screening once they understand the nature and risks of the disease. Data from CMS’s 1999 Medicare Current Beneficiary Survey show that, while about one-fourth of beneficiaries who did not receive flu shots were unaware of the benefits of obtaining this immunization, about half of the people who were not immunized avoided getting the shot for reasons such as concerns about side effects and whether doing so would effectively prevent illness. On the other hand, usage rates alone may not provide a clear picture of success, and may mask inherent limitations to increasing usage rates. For example, survey data show that 44 percent of women age 65 and over have had hysterectomies—an operation that usually includes removing the cervix. For these women, researchers state that cervical cancer screening may not be necessary unless they have a prior history of cervical cancer. 
Also, according to officials in charge of research on preventive services at the National Institutes of Health, it is reasonable for beneficiaries, their families, or their providers to decide to forgo services because of the limited benefits they would offer patients with terminal illnesses or of advanced age. These officials explained that research has shown, for example, that the benefits of cancer screening services, such as for prostate, breast, and colon cancer, can take 10 years or more to materialize, a time frame that could exceed the life expectancy of as much as half of the Medicare population. CMS officials also pointed out that the controversy over the effectiveness of some services, such as mammography and prostate cancer screening, may add to the difficulty in further improving screening rates for these services. The benefit of mammography has recently been challenged by two Danish researchers and an independent group of experts on the National Cancer Institute’s (NCI) advisory panel, who cited serious flaws in 6 of the 8 clinical trials that showed benefits. However, subsequent to the Danish report and the NCI panel’s statement, both the NCI and the U.S. Preventive Services Task Force reiterated their recommendation for regular mammography screening. While acknowledging the methodological limitations in these trials, the U.S. Preventive Services Task Force concluded that the flaws in these studies were unlikely to negate the reasonable, consistent, and significant mortality reductions observed in these trials. Routine screening for prostate cancer is also a matter of controversy. For example, the American Cancer Society and the American Urological Association support routine prostate cancer screening, while the U.S. Preventive Services Task Force and others state that there is insufficient evidence to support it. CMS has studied various types of interventions to increase the use of preventive services among seniors. 
These studies show that many types of interventions can potentially be effective, but also that interventions must be tailored to the circumstances of specific situations. CMS is funding efforts in every state to implement interventions for three preventive services that Medicare covers. CMS also has efforts under way aimed at increasing the use of preventive services among minority and low-income seniors. CMS has sponsored reviews of studies looking at the effectiveness of interventions to increase use of preventive services among people age 65 and older. One of these reviews evaluated the effectiveness of interventions targeting people over age 65 for five services covered by Medicare—immunizations for flu and pneumonia and screenings for breast, cervical, and colon cancer. The report evaluated 218 separate studies on interventions designed to increase use of preventive services. The studies were performed in both academic and nonacademic settings in various geographic areas, and in a mixture of reimbursement systems. Most of the interventions studied that involved pneumococcal and influenza immunizations were targeted toward persons over 65 years of age, while cancer screening interventions were targeted at adults generally, not necessarily those 65 years of age or older. This evaluation concluded that no specific intervention was consistently most effective for all services and settings, and that success depended on how closely the intervention addressed the unique circumstances in each state and for different populations within each state, while also taking into account the cost and difficulty of implementation. Obstacles to improved screening rates can differ across states, thus requiring different approaches. For example, officials responsible for improving the use of preventive services in Idaho and Washington explained that while a significant barrier in Idaho was beneficiary access to Medicare providers, this was not a barrier in Washington. 
The CMS evaluation also showed that using multiple interventions generally provided greater success than using a single approach. The types of interventions evaluated in the CMS-sponsored review included a variety of efforts targeting health delivery systems, providers, and patients. The key conclusion the report drew from the literature was that organizational and system changes, such as the use of standing orders and the use of financial incentives, were the most consistent in producing the largest increases in the use of preventive services. These and other interventions found to be effective follow. System Change. These interventions change the way a health system operates so that patients are more likely to receive services. For example, medical or administrative staff may be given responsibility to ensure that patients receive services, or standing orders may be implemented in nursing homes to allow nonphysician personnel to administer immunizations without a physician’s order. Incentives. These interventions include gifts or vouchers to patients for free services. Medicare allows this type of approach only in limited circumstances. Reminders. These interventions include computer-generated or other approaches by which medical offices (1) reminded physicians to provide the preventive service as part of services performed during a medical visit or (2) generated notices to patients that it was time to make an appointment for the service. Studies show that reminders to either patients or physicians can effectively improve rates for cancer screening. However, a computerized provider reminder is consistently more cost-effective than notifying the patient directly when a computerized information system is already available in a physician’s medical office. Patient reminders that are personalized or signed by the patient’s physician are more effective than generic reminders. Education. 
These interventions include pamphlets, classes, or public events providing information for physicians or beneficiaries on coverage, benefits, and time frames for services. The study found that while the effect of patient education is significant, it is consistently less effective than system change, incentives, or reminders. CMS is implementing interventions in all states through its PRO program. Under this program, CMS contracts with 37 PROs, each responsible for monitoring and improving the quality of care for Medicare beneficiaries in one or more states, in U.S. territories, or in the District of Columbia. These efforts are currently aimed at three preventive services offered under Medicare—immunizations against flu and pneumonia and screening for breast cancer. CMS chose these topics based on their public health importance and other factors. CMS also contracts with select PROs to provide support and assistance to all PROs for each area of focus. For example, CMS has contracted with two of the existing PROs, one for flu and pneumonia immunizations and one for breast cancer screening, to provide support and share information among the PROs regarding their efforts to improve usage rates for these services. Our discussions with the officials from these two PROs indicate that, for immunizations, most PROs are focusing on ways to better educate patients and providers on the importance of getting flu and pneumonia shots. For breast cancer screening, efforts are focusing on developing integrated reminder systems, such as chart stickers or computer-based alerts that physicians’ offices can use to contact patients on a timely basis. Officials for the two PROs providing support indicated that most PROs were implementing multiple interventions. 
For example, in a newsletter intended to help PROs share information, officials at one PRO reported that they have developed concurrent breast cancer screening interventions for their state, which are targeted at physicians and their staffs, nurses, and beneficiaries. Officials for this PRO report the following. For physicians and their staffs, they (1) host seminars to teach them about reminder and billing systems, (2) provide toolkits that include reminder systems, checklists, and other materials, and (3) conduct on-site consultations to encourage providers to implement system changes. For nurses, they are conducting a campaign intended to increase awareness and encourage nurses and student nurses to identify female friends and family members who are overdue for mammograms. The campaign includes information packets, a newsletter, and information booths at nursing organization meetings. For beneficiaries, the PRO publishes a periodic newsletter on the subject of preventive medicine. This newsletter includes articles on the importance of mammography for early detection of breast cancer. CMS has taken steps to evaluate the success of PRO efforts. CMS officials explained that the contracts with the PRO organizations are “performance based” and provide financial incentives as a reward for superior outcomes. The contracts include a methodology in which the performance of the PRO for each state, U.S. territory, and the District of Columbia is scored based on 22 indicators, including flu and pneumonia vaccination rates and mammography rates. The performance of the PRO in each state will then be ranked against all other states in order to identify the higher and lower performing PROs. CMS intends to automatically renew the contracts with the top 75 percent of the PROs for the next contract cycle, which begins in 2002. 
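The ranking-and-renewal rule described above can be sketched in a few lines of code. The top-75-percent cutoff comes from the text; the PRO names and composite scores are hypothetical, and in practice CMS scores each PRO on 22 indicators rather than a single number:

```python
# Hypothetical composite scores for four PROs (higher is better).
scores = {"PRO-A": 88.0, "PRO-B": 74.5, "PRO-C": 91.2, "PRO-D": 66.3}

# Rank PROs from highest to lowest composite score.
ranked = sorted(scores, key=scores.get, reverse=True)

# Automatically renew contracts for the top 75 percent of PROs.
cutoff = int(len(ranked) * 0.75)
renewed = ranked[:cutoff]

print(renewed)  # ['PRO-C', 'PRO-A', 'PRO-B']
```

With four PROs, the 75-percent cutoff renews the top three; the lowest-ranked PRO would not be renewed automatically.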
The PRO contracts also contain financial performance incentives allowing each PRO to receive up to an additional 2 percent payment based on the positive outcomes of their interventions. CMS officials expect information on the results by the summer of 2002. Consequently, we have not assessed the outcome of PRO efforts or CMS’s methodology for measuring PRO performance. While the current efforts include 3 of the 10 preventive services covered by Medicare, CMS is also developing indicators and performance measures necessary for interventions to increase use of screening services for osteoporosis and colorectal and prostate cancer. CMS officials stated that such interventions would be implemented in future contracts with PROs. CMS is not currently developing indicators for the remaining preventive services covered by Medicare—hepatitis B immunizations or screenings for glaucoma and vaginal cancer. CMS is also sponsoring PRO interventions and projects in each state to increase use of preventive services by minorities and low-income Medicare beneficiaries. CMS-funded research on successful interventions for the general Medicare population 65 and older concluded that evidence was insufficient to determine how best to increase use of services by minorities and low-income seniors across various geographic settings. Differences in how populations use preventive services are sometimes found even when the populations have similar geographic settings or delivery systems. For example, a study showed that although use of flu shots among white and African American seniors is higher under managed care than fee-for-service, the significant disparities in levels of use between these ethnic groups persist in both these environments. To begin addressing these information gaps, CMS requires that each PRO conduct a project focusing on one of several specified Medicare populations. 
This population can be low-income seniors enrolled in both Medicare and Medicaid or one of several minority groups: American Indians, Alaska Natives, Asian Americans and Pacific Islanders, African Americans, or Hispanics. For the population chosen, the PRO is to target interventions for one service. The projects in most states are focusing on increasing breast cancer screening or flu and pneumonia immunization among African American or low-income seniors. PROs are required to identify the barriers that exist for the selected population and service, and to implement interventions specifically designed to address these barriers for patients and providers. A summary of PRO efforts to increase services for minorities and low-income seniors is expected to be published sometime after the spring of 2002. Other studies or projects under way by CMS also aim to identify barriers and increase use of services by certain Medicare populations. For example, the Congress directed CMS to conduct a demonstration project to, among other things, develop and evaluate methods to eliminate disparities in cancer prevention screening measures. The law specifies a total of nine demonstration projects to include two state-level demonstrations for each of four minority groups (American Indians, including Alaska Natives, Eskimos, and Aleuts; Asian Americans and Pacific Islanders; African Americans; and Hispanics) and one project in the Pacific Islands. In addition, one of the projects must have a rural focus and one must have an urban focus for each group. CMS expects to produce a report by December 2002, after the project’s first phase is completed, identifying best practices and models to be tested in demonstration projects. 
The second phase, which is to start around December 2002, is to test these models by implementing them in actual demonstration projects intended to determine which methods are most effective in reducing the incidence of cancer and improving minority health by overcoming barriers to the use of preventive services in the target populations. A report evaluating the cost effectiveness of the demonstration projects, the quality of preventive services provided, and beneficiary and health care provider satisfaction is due to the Congress in 2004. We obtained comments on our draft report from CMS. CMS commented that the draft report focused on the activities of its PROs and did not consider all of CMS’s health promotion activities. CMS provided details on its publication and educational campaigns to inform Medicare beneficiaries about preventive service benefits and to encourage their use. CMS’s comments are reproduced in appendix I. We acknowledge that our report does not describe all of CMS’s health promotion and education activities under way that relate to increasing the use of preventive services among the Medicare population. While beneficiary education activities are worthwhile, CMS studies have shown that other interventions, such as those directed at changing the way a health delivery system operates so that patients are more likely to receive services, are more effective. Because PROs and CMS demonstration projects are accountable for facilitating the implementation of these types of interventions, we focused our efforts on describing these activities and the status of their evaluations. We have revised the report to make it clear that PRO activities are in addition to other CMS beneficiary education efforts. CMS also provided technical comments that we considered and incorporated where appropriate. As arranged with your office, unless you release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. 
At that time we will send copies of this report to the secretary of health and human services, the administrator of the Centers for Medicare and Medicaid Services, the director of the Centers for Disease Control and Prevention, and others who are interested. We will also make copies available to others on request. If you or your staff have any questions, please contact me at (202) 512-7119, or Frank Pasquier at (206) 287-4861. Other major contributors are included in appendix II. Other major contributors to this report include Lacinda Ayers, Matthew Byer, Jennifer Cohen, Jennifer Major, Behn Miller, and Stan Stenersen.

Preventive medicine, including immunizations for many diseases and screening for some types of cancer, holds the promise to extend and improve the quality of life for millions of Americans. Medicare now covers three preventive services for immunizations and three for screenings, and the Centers for Medicare and Medicaid Services (CMS) sponsors "interventions" to increase the use of preventive services. GAO found that the use of preventive services varies widely by service, state, ethnic group, income, and education. The greatest differences among ethnic groups were for immunization rates. Cancer screening rates tended to differ according to income and education level. CMS pays for interventions that promote breast cancer screenings and pneumonia and flu shots. Most of the techniques being used, such as reminder systems that medical offices can use to alert doctors and patients to needed cancer screenings, have been effective. CMS is evaluating what its current efforts have accomplished and expects the results later this year.
EEOICPA has two major components. The Department of Labor (DOL) administers Subtitle B, which provides eligible workers who were exposed to radiation or other toxic substances and who subsequently developed illnesses, such as cancer and lung disease, a one-time payment of up to $150,000 and covers future medical expenses related to the illness. The benefits are payable from a compensation fund established by EEOICPA. Subtitle B is not covered in this report. Prior to October 2004, Energy administered Subtitle D to help its contractors’ employees file state workers’ compensation claims for illnesses determined by a panel of physicians to have been caused by exposure to toxic substances in the course of employment at an Energy facility. This report covers payments made to administer Subtitle D. To facilitate outreach to potential claimants and to help claimants obtain work and medical records to initiate claims under EEOICPA, Energy established 11 regional resource centers. These resource centers were a gateway for claimants applying for assistance under both Subtitle D, administered by Energy, and Subtitle B, administered by DOL. Energy and DOL shared the resource centers’ costs of operation, staffing, and training. To achieve this, DOL reimbursed Energy for about half of the costs of its contract with Eagle Research Group, Inc., the company that staffed and operated most of the resource centers. Additionally, DOL reimbursed Energy for a portion of other costs Energy paid directly, such as those for the leased space for the centers. After EEOICPA claims were received through the resource centers and headquarters, Energy requested its field offices to locate records that would support the claims, such as employment, medical treatment, and toxic substance exposure records. Energy forwarded the information collected to claim developers and various assistants who assembled the information into case files. 
A panel of physicians reviewed the case files to determine whether exposure to a toxic substance during employment at an Energy facility was at least as likely as not to have caused, contributed to, or aggravated the claimed medical condition. In addition to the panel physicians, other doctors performed quality assurance checks of the case files before the claims were submitted to the physician panels and again after the physician panels had made recommendations. All panel determinations were finalized by a medical director employed by Energy. Energy communicated with applicants through an EEOICPA hotline and through letters. Energy began accepting applications for Subtitle D in July 2001, when the majority of the resource centers opened, and began developing cases in the fall of 2002, when its final administrative rule took effect. While Energy got off to a slow start in processing cases, completing only 6 percent of approximately 23,000 cases by December 31, 2003, it later increased claim development activities, which resulted in a backlog of claims awaiting review by the physician panels. In June 2004, Energy transferred $21.2 million in funds to its Office of Worker Advocacy (OWA) in an effort to clear the backlog of claims. At the same time, it increased the number of case developers and physicians serving on the review panels. Legislation was also moving through Congress as early as June 2004 to transfer the administration of Subtitle D from Energy to DOL. Ultimately, in October 2004, Congress repealed Subtitle D and created Subtitle E, to be administered by DOL. In light of the potential transfer, Energy ceased hiring new case developers in August 2004, then gave official instruction to cease claims processing in November 2004. Energy received $112.6 million in appropriated funds (including transfers) through fiscal year 2005 for its EEOICPA activities and spent over $92 million. Energy's field offices continue to research claims that are now processed by DOL under Subtitle E. 
See figure 1 for a time line of significant OWA program events. Under Subtitle D of EEOICPA, Energy’s role was to assist applicants in pursuing state workers’ compensation benefits but not to pay any benefits to the applicants. Therefore, the costs associated with Energy’s EEOICPA activities are administrative costs only. We analyzed Energy’s program costs by major program activity, as shown in table 1. Through multiple contracts in some cases, four major contractors performed the majority of OWA’s program activities. Eagle Research Group, Inc. (Eagle), staffed and operated the resource centers from September 2001 through February 2005 under time and materials task orders issued under a GSA Federal Supply Schedule (FSS) contract. Westwood Group, Inc. (Westwood), administered the physician panels, provided a quality-assurance check on claims, managed the EEOICPA hotline, and coordinated the field office research requests. Additionally, Westwood provided certain other administrative services. Energy obtained Westwood’s services through two time and materials task orders issued under a GSA FSS contract. One task order was in effect from August 2001 through February 2005. The other began in September 2004 and can be extended through September 2009 if Energy exercises the four option periods. Under the option periods and current statement of work, Westwood would continue its analytical services relating to the EEOICPA claims research and other administrative activities for Energy’s Office of Environment, Safety, and Health (ES&H). Technical Design, Inc. (TDI), provided administrative personnel as well as analysts trained in environment and health issues. TDI provided services to OWA under three consecutive contracts issued by Energy. All three were cost reimbursement contracts that contained performance incentives. The first contract was described by Energy as a cost plus incentive fee. The second and third contracts were cost plus award fee. 
Westwood also provided additional services to OWA through TDI under these contracts. In addition to services provided to OWA, both Westwood and TDI also provided other services to Energy's ES&H. On their monthly invoices, Westwood and TDI identified OWA services separately from other ES&H services. Science and Engineering Associates, Inc. (SEA), under its first task order, provided information technology services to create, develop, and maintain the Case Management System to track the progress of individual cases. Under subsequent task orders, services broadened over time so that SEA provided case developers and assistants who performed case processing activities. SEA ultimately provided services equal to approximately one-third of OWA's program costs. In January 2004, Sidarus, Inc. (Sidarus), purchased SEA. In June 2004, Sidarus was renamed Apogen Technologies, Inc. SEA continues to do business as SEA. SEA's services were initially obtained by Energy through a memorandum of agreement (referred to in this report as an interagency agreement) between Energy and the Navy's Space and Naval Warfare Systems Center, New Orleans (SSC NOLA). To implement the interagency agreement, SSC NOLA used GSA's Federal Technology Service (FTS) to utilize an existing blanket purchase agreement (BPA) between SEA and GSA's FTS, dated August 2000, that was entered into under a GSA FSS contract. Under the BPA, GSA's FTS issued three consecutive time and materials task orders to SEA to provide services to Energy. The interagency agreement between Energy and SSC NOLA took effect in December 2001 and was scheduled to run for 3 years. Under this arrangement, GSA paid SEA for its services and was reimbursed by SSC NOLA. SSC NOLA received reimbursement from Energy. Energy was thus the customer and final payer for SEA's services. SSC NOLA elected to end work under the interagency agreement on September 30, 2004. In this report, we refer to payments to SEA as payments by Energy. In February 2004, Energy began pursuing a new contract to replace the interagency agreement between Energy and SSC NOLA. 
However, the new procurement action was not completed by the end of the interagency agreement on September 30, 2004, and Energy issued a time and materials bridge contract directly with SEA beginning October 1, 2004, for a base period of 3 months to continue case development activities and, eventually, assist in terminating and transferring the program. Energy’s direct contract with SEA expired in December 2004. Table 2 provides a description of two contract types used to administer OWA: cost reimbursement and time and materials. OWA utilized two different variations of cost reimbursement contracts: cost plus incentive fee and cost plus award fee. A description, common applications, benefits and risks associated with the contract type, and constraints or requirements for the government are listed for each type. The services provided by SEA were obtained by Energy through a series of agreements. Energy’s interagency agreement with SSC NOLA required SSC NOLA to provide certain services to Energy. SSC NOLA carried out the agreement using an existing BPA between GSA’s FTS and the contractor, SEA. The BPA was entered into under a GSA FSS contract, and an official at GSA’s FTS was the contracting officer (CO) who had authority to contract for goods and services on behalf of the government. Additionally, the CO had overall responsibility for negotiating task orders under the BPA and certifying the contractor’s invoices for payment based on evidence of approval (i.e., receipt and acceptance of goods and services) by the ordering agency. The CO designated representatives of the ordering agency—in this case, SSC NOLA—to be the contracting officer’s representatives (COR). The COR was authorized by the CO to perform specific technical and administrative functions. The COR was responsible for the review and approval of SEA invoices for payment by GSA. Additionally, SSC NOLA was responsible for approval of contractor travel and contract deliverables. 
Although authority for contract oversight and administration was delegated among multiple agencies, ultimate responsibility for the contract rested with the customer agency (receiving agency), Energy. Although the use of interagency contracting vehicles can be beneficial because the ordering agency does not have to go through an extensive procurement process, interagency agreements must be effectively managed to ensure compliance with the FAR and to protect the government’s interests. When a customer agency’s contracting needs are being handled by another agency, effective internal controls are particularly critical because of the more complex environment. We, along with agency inspectors general, have reported risks associated with interagency contracting. Management of interagency contracting was added to GAO’s high-risk list in January 2005. We found that roles and responsibilities for managing interagency contracts need clarification and agencies need to adopt and implement policies and processes that balance customer service with the need to comply with requirements. Federal requirements for acquiring goods and services through contracts are found in laws and implementing regulations. The FAR prescribes uniform policies and procedures for acquisition by executive agencies. Additionally, agencies may have their own supplemental regulations, policies, and procedures for acquisition. For example, Energy has a supplemental regulation called the Department of Energy Acquisition Regulation, an acquisition guide, an accounting handbook, and other guides that describe its policies regarding contracts, subcontracts, and interagency agreements. 
On November 29, 2005, the Department of Justice (DOJ) and SEA executed a settlement agreement and release (settlement) after an investigation of allegations of improper billings by SEA of labor charges on work for SSC NOLA and its customers under a GSA FSS contract and two related BPAs covering the period from April 1999 through September 2005. SEA billed SSC NOLA approximately $346 million for labor charges over this period, including approximately $26.6 million under task orders that provided services to Energy. The “covered conduct” investigated by the government related to allegations of improper billing by SEA for labor in two areas: billing indirect labor costs as direct labor costs and billing for employees in labor categories for which they were not qualified. Under the terms of the settlement, SEA paid the government $9.5 million. In turn, the government release provided that the government will have no further civil or administrative monetary claims or cause of action against SEA under the False Claims Act or any other statute creating causes of action for damages or penalties for the submission of false or fraudulent claims, or at common law for fraud or under any other statutes or under theories of payment by mistake, unjust enrichment, or breach of contract, for the covered conduct. In this report, we did not determine whether or to what extent the terms of the settlement may affect any potential additional monetary recoveries by the government for the questionable and improper payments made to SEA that we identified. Internal control is the first line of defense in safeguarding assets and preventing and detecting fraud and errors. Internal control is not one event or activity but a series of actions and activities that occur throughout an entity’s operations on an ongoing basis. It comprises the plans, methods, and procedures used to effectively and efficiently meet missions, goals, and objectives. 
Internal control is a major part of managing any organization. As required by 31 U.S.C. § 3512(c),(d), commonly referred to as the Federal Managers’ Financial Integrity Act of 1982, the Comptroller General issues standards for internal control in the federal government. These standards provide the overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. These standards include establishment of a positive control environment that provides discipline and structure as well as the climate that influences the quality of internal control. As we reported in our Executive Guide, Strategies to Manage Improper Payments, a lack of or breakdown in internal control may result in improper payments. Improper payments are a widespread and significant problem in government and include inadvertent errors, such as duplicate payments and miscalculations; payments for unsupported or inadequately supported claims or invoices; payments for services not rendered; and payments resulting from outright fraud and abuse. Energy’s control environment and specific internal control activities over payments to contractors and overall contract costs were not effective in reducing the risk of improper payments. Energy did not establish an effective review and approval process for contractor invoices that enabled it to verify that goods and services billed had actually been received and charged at the agreed-upon amounts. In the case of SEA, much of the responsibility rested with SSC NOLA; however, Energy did not assure itself that these responsibilities were adequately carried out. Further, accountability for equipment purchased and reimbursed by Energy for the program by contractors was not maintained. 
In addition, Energy and its contracting partners, GSA and SSC NOLA, did not give adequate consideration to subcontractor arrangements, including the extent to which subcontracts were used and what amount contractors were to be paid for subcontractor work. Payments for subcontractor costs represented nearly $15 million. Finally, Energy did not effectively monitor overall contract costs and made errors in reporting total contract costs in its internal and external financial reports. Cumulatively, these weaknesses and the poor control environment made Energy vulnerable to improper payments to contractors and precluded it from effectively managing the overall cost of the contracts. Energy did not establish adequate control activities to ensure an effective process for the review and approval of contractor invoices. Specifically, contractor services were not adequately monitored, labor categories were not verified, and other direct costs were not adequately reviewed. In the case of the largest contract with SEA, SSC NOLA was responsible for review and approval of SEA invoices, but did not adequately perform this function, nor did Energy take steps to assure itself that SSC NOLA was properly carrying out its responsibilities. Further, the review and approval process used by Energy for its contracts did not include the steps necessary to validate the invoices before payment. The FAR, Energy’s accounting handbook, and federal standards for internal control require review and approval of invoices in order to determine if goods and services were actually provided in accordance with contract terms and if invoiced amounts were allowable under regulation or the terms of the contract. Proper invoice review procedures for contractor services call for an effective process to observe and monitor the services provided by contractors and ensure that timely verification of services is provided to the officials approving the invoices for payment. 
However, neither Energy on its contracts nor SSC NOLA on the SEA contract conducted adequate observations and monitoring of services provided by contractors or linked the observations that were performed to invoices submitted to the government. SEA, Westwood, and Eagle provided services under time and materials task orders. The FAR states that because time and materials contracts provide no positive profit incentive to the contractor for cost control or labor efficiency, appropriate government surveillance (or monitoring) of “contractor performance is required to give reasonable assurance that efficient methods and effective cost controls are being used.” SSC NOLA, as the COR on the SEA contract, was responsible for performing observations of services provided by SEA but did so only sporadically. The SSC NOLA Project Manager, who was located in New Orleans, stated that he made periodic trips to observe SEA services in the Washington, D.C., area. However, we determined based on our review of travel documentation that as much as 6 months passed between his trips, and that no trips were made after February 2004 when SEA more than tripled its workforce in support of OWA. Further, even when the SSC NOLA Project Manager did observe services, he did not systematically link these monitoring activities to the invoice review and approval process. We identified a similar lack of systematic linkage of monitoring activities to the invoice review process for services provided by Eagle. Eagle operated the resource centers supporting EEOICPA activities of both Energy and DOL. Energy provided some evidence of programmatic monitoring and the receipt of quarterly financial information for Eagle’s services, but did not demonstrate how those activities were systematically linked with Energy’s review of Eagle’s monthly invoices. 
Without such linkage, Energy did not have adequate assurance that amounts billed reflected services actually provided and that they were billed at the correct rates. Energy’s monitoring of services provided under the Westwood contract was also insufficient, as follows. Physicians serving on physician panels were retained by Westwood as independent contractors. These physicians reviewed cases at their homes or at Energy headquarters and submitted invoices or time sheets to Westwood for the hours worked. Neither Energy nor Westwood had an effective mechanism in place to assess the reasonableness of the hours billed by these physicians, which totaled over $3 million. Our review of selected physician panel invoices found that one physician reported working as many as 19 hours in a day, and these hours were not questioned by Westwood or Energy. In another example, a physician regularly billed significantly more than 173 hours a month—the average number of working hours a month based upon working 5 days a week and 8 hours a day. This physician billed 265 hours in March 2004, 210 in April 2004, 335 in May 2004, and 252 in June 2004. Westwood provided some evidence—a variety of metrics—that it considered the productivity of the physicians, such as reports that summarized hours needed to review each case, and quality metrics, such as decisions overturned and cases returned because of clerical errors. However, this approach was not effective in assessing the reasonableness of the hours billed. In fact, the productivity measures were developed based on the hours actually billed on the invoices submitted by the physicians, and therefore Energy had no independent baseline with which to measure productivity or to assess the reasonableness of hours billed on the invoices submitted by the physicians. 
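An independent baseline of this kind could have been implemented as a simple screen on invoiced hours. The sketch below is purely illustrative: the 173-hour figure follows the report's working-hours arithmetic, but the daily cap and the labeling of the data are assumptions, not reconstructions of actual OWA invoices.

```python
# Illustrative reasonableness screen for invoiced labor hours.
# The 173-hour monthly baseline follows from 8 hours/day, 5 days/week;
# the 12-hour daily cap is a hypothetical assumption for this sketch.

MONTHLY_BASELINE = 173  # average working hours per month
DAILY_CAP = 12          # hours in one day above which an entry is flagged

def flag_monthly(invoices, baseline=MONTHLY_BASELINE):
    """Return (biller, month, hours) entries exceeding the monthly baseline."""
    return [entry for entry in invoices if entry[2] > baseline]

def flag_daily(timesheet, cap=DAILY_CAP):
    """Return (biller, date, hours) entries exceeding the daily cap."""
    return [entry for entry in timesheet if entry[2] > cap]

# Monthly hours billed by one panel physician, as cited in the report:
monthly = [("physician", "2004-03", 265), ("physician", "2004-04", 210),
           ("physician", "2004-05", 335), ("physician", "2004-06", 252)]
print(flag_monthly(monthly))  # all four months exceed the 173-hour baseline
```

A screen like this does not establish that the hours were improper, since long days may be legitimate, but it gives the invoice reviewer specific entries to question before payment rather than after.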
Four doctors who performed quality checks before the claims were submitted to the physician panels and again after the physician panels had made recommendations were also not sufficiently monitored. Three of the four doctors we interviewed told us that they worked independently or with only limited monitoring or supervision by Westwood. The doctors told us that they did interact with Energy technical personnel; however, these technical personnel were not involved in Energy’s invoice review and approval process. The doctors submitted their invoices or other records of time worked to Westwood for payment, and Westwood then billed the government for these charges. These physicians regularly billed for 9 to 12 hours per day and as high as 18 hours per day, yet there was no evidence that these charges were validated by Westwood or questioned by Energy. Energy told us that it was aware that these doctors worked long hours. However, Energy did not systematically observe the hours worked and then compare any observations to the amounts paid for those hours, nor did it determine that Westwood was adequately monitoring these services as a basis for its billings. For SEA task orders, SSC NOLA did not take appropriate steps to verify that labor hours were being billed at the appropriate rates or to determine that employees were qualified under the labor category education and experience requirements negotiated in its contracts. Further, Energy did not take steps to ensure that SSC NOLA implemented appropriate verification procedures or effective compensating control strategies. Appropriate procedures to verify labor hours may include sampling on a test or periodic basis résumés of contractor employees, including independent verification of education and work experience to requirements under the contract or detailed evaluations of labor categories at higher risk because of volume or price per hour. 
In certain cases, we found that the labor categories negotiated in the contract did not reflect the actual tasks being performed, making it difficult to determine whether the labor charges were based on appropriate rates. We found that the labor categories in the contract were originally designed for information technology activities and did not reflect labor categories appropriate for the significant case development activities SEA performed in the last 2 of 3 years of SEA's task orders. While Energy provided us with a crosswalk of the information technology labor categories that SEA used for billing purposes to case processing job titles under the third task order, this crosswalk was not used by SSC NOLA in order to review SEA's billings. Further, the underlying BPA was not amended to reflect labor categories that matched the case development activities that SEA provided. We found similar problems with another contractor, Westwood. The statement of work underlying the Westwood task orders from August 2001 through February 2005 provided for nine activities "supporting the Advisory Committee." However, Westwood performed the following additional activities that were significant to OWA in terms of nature and amount but were never incorporated into Westwood's statement of work:

- Implementing physician panels, which included retaining doctors and coordinating the flow of cases between panel members.
- Providing medical doctors who performed quality checks before the claims were submitted to the physician panels and again after the physician panels had made recommendations.
- Obtaining consulting services at the request of Energy, including advisors on environmental health issues and process improvements.

Since the contract did not fully reflect the duties actually performed, Energy did not have an adequate basis on which to determine whether amounts billed for labor were appropriate and consistent with the contract terms. 
Neither Energy for the Westwood contract nor SSC NOLA for the SEA contract performed a sufficient review of other direct costs billed under the contracts. Energy did not require Westwood to report a detailed breakdown of its other direct costs, such as travel and materials, as stipulated by its contract and did not request Westwood to submit supporting documentation for these costs except on a sporadic basis because, according to Energy, the amount of supporting documentation was “too voluminous.” Westwood billed Energy for approximately $11.6 million of goods and services provided from August 2001 through February 2005 in support of OWA, of which approximately $5.2 million was for other direct costs. As shown in figure 2, the amount of Westwood’s other direct costs was significant to its monthly billings but was not adequately described on the invoice. Our review of the invoice documentation Energy did request and receive for one monthly invoice identified costs that should have been questioned and investigated by Energy prior to payment, but were not. In addition, we examined the supporting documentation that was available for other Westwood invoices (a majority of which Energy did not request or review prior to payment) and identified numerous charges improperly paid by Energy. These findings are discussed later in the report. SSC NOLA, in its role as COR and project manager on the SEA task orders, did not sufficiently review travel costs incurred by SEA. SSC NOLA preapproved travel when it determined the travel met a need of the program and then subsequently reviewed and approved the travel voucher, including all receipts submitted, after the travel had occurred. The COR also verified that travel had been preapproved, travel corresponded with the preapproved dates and location, and the amounts did not exceed the preapproved estimates. However, SSC NOLA did not question whether the costs actually incurred for airfare were reasonable and appropriate. 
In particular, we found instances of first-class travel and other excessive airfare costs that were not identified or questioned by SSC NOLA. For example, our analysis of the historical data supporting SEA’s travel for OWA activities on its most frequently flown route (New Orleans to Ronald Reagan Washington National Airport) showed airfares as high as $1,482 for first-class travel and as low as $362 for coach class. SSC NOLA officials indicated that in the future they would review contractor travel costs more closely, including adding new procedures to verify that contractor travel complied with the applicable travel regulations regarding first-class travel. Energy did not have sufficient controls over the equipment, such as computers, laptops, and copying machines, purchased by its contractors for the program. The equipment, totaling nearly $1 million, ranged from a $160 printer to a $17,742 copying machine. Any equipment purchased by a contractor and for which the government holds the title is considered government-owned property. Maintaining accountability over assets calls for procedures to approve equipment purchases prior to purchase, steps to ensure the contractors received and safeguarded the assets during the operation of the program, and conducting timely inventories of equipment it received from each contractor at the conclusion of the program. However, Energy did not have adequate procedures in place to properly account for equipment purchased by its contractors nor did it work with SSC NOLA to ensure adequate monitoring of SEA-purchased equipment. Specifically, Energy did not have a formal process to approve Westwood equipment purchases prior to purchase. Additionally, Energy did not take steps to ensure the contractor maintained accountability over equipment while it was in its possession. 
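Maintaining accountability over contractor-purchased equipment reduces, at its simplest, to a reconciliation between what was billed and what is on hand. A minimal sketch of such a check follows; the serial numbers and quantities are hypothetical, used only to show the mechanics.

```python
# Illustrative reconciliation of equipment reimbursed on contractor invoices
# against the inventory list the contractor supplied. All serial numbers
# here are hypothetical.

billed = {"SN-1001", "SN-1002", "SN-1003", "SN-1004", "SN-1005"}  # from invoice support
inventoried = {"SN-1001", "SN-1003", "SN-1005"}                   # contractor-supplied list

missing_from_inventory = billed - inventoried   # billed but never listed
unexplained_on_hand = inventoried - billed      # listed but never billed

print(sorted(missing_from_inventory))  # ['SN-1002', 'SN-1004']
print(sorted(unexplained_on_hand))     # []
```

Run periodically during the life of the contract rather than months after it expires, a check like this surfaces unaccounted-for items while they can still be located.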
Further, physical inventories of Westwood and SEA purchased equipment were not completed until at least 8 months following the expiration of the respective contracts. Our analysis of documentation supporting Westwood’s invoices from January 2002 through February 2005 found that Westwood purchased over 70 pieces of computer and computer-related items costing approximately $62,000 and was subsequently reimbursed by Energy. Energy, however, did not conduct an inventory of that equipment until December 2005, nearly 9 months after Westwood’s contract expired. Further, since Energy had not previously obtained supporting documentation for Westwood’s equipment purchases, Energy relied on Westwood to provide it with a listing of all items purchased. During its inventory, Energy identified 13 missing items. Our comparison of the inventory to Westwood’s billings for the equipment, however, identified an additional 31 items that Westwood had not included on its listing that also needed to be accounted for. Finally, we identified over $31,000 in computer purchases that did not contain sufficient supporting detail, such as a description of the items, serial numbers, or model numbers, to be used to determine if the items were accountable assets and, if so, if they were included on the inventory list. In response to our inquiries, Energy made an effort to locate these additional items and has indicated that several items have been found. Energy’s and its contractor’s lack of accountability for the equipment over an extended period put this equipment at risk of loss or misappropriation without detection. Energy did not consistently obtain and review subcontract arrangements or adequately consider the billing implications of the extensive use of subcontracts by its prime contractors. Nearly $15 million of $92 million in OWA program costs were incurred by subcontractors. 
However, neither Energy nor SSC NOLA for SEA exercised sufficient management oversight to be fully informed of the nature, extent, scope of services, and terms of billings to the government for these services as well as the oversight the prime contractor was to exercise over its subcontractors. Our review of the subcontracting arrangements used by SEA and TDI identified numerous subcontracting issues that were not addressed by Energy or, in the case of SEA, by SSC NOLA or GSA. Of the $29 million in labor billings by SEA, $10.1 million was provided by subcontractors, including temporary staffing agencies. While Energy, SSC NOLA, and GSA were aware that SEA utilized subcontracted labor, GSA’s initial consideration of the use of subcontractors was given in 2000 as part of SEA’s proposal under the BPA more than a year before the Energy task orders and was not updated to reflect changes in SEA’s business partners or the scope of work provided to Energy over time. To illustrate, SEA utilized 16 subcontractors to provide services to Energy, but only 5 of those subcontractors, representing approximately 6 percent of total billings for subcontractor services, were included in SEA’s proposal. Further, there was no evidence that either SSC NOLA or GSA had been informed of the extent to which SEA used subcontractors to provide OWA services or the amount SEA paid for those services. SSC NOLA told us that it was concerned that SEA did not separately identify the amount of charges associated with subcontracted labor from other labor charges, but said that GSA officials told it such a breakout was not necessary. SSC NOLA did not pursue the issue again with either GSA or SEA. Additionally, TDI billed Energy for services provided by Westwood from February 2002 through September 2004 under an arrangement that TDI and Westwood viewed as a prime contractor and subcontractor relationship. 
However, we found that no agreement existed between TDI and Westwood containing basic information, such as hourly billing rates by labor category, allowable costs, and other basic terms and conditions, for the period in which Westwood provided services. Further, while Energy's CO told us he had obtained and reviewed a price proposal submitted by Westwood for this period, Energy did not take the appropriate steps to formalize those prices in TDI's prime contract with Energy or any other binding agreement. Without an effective contractual agreement, including negotiated rates, it was not possible for Energy to adequately review the amounts TDI billed for costs attributed to Westwood. Energy did not establish internal control monitoring practices to effectively manage overall contract costs, including using contract ceilings to manage and encourage cost-effectiveness. Further, Energy did not accurately report contract costs in internal and external financial reports. We identified instances of improper cost assignments between Energy programs and a payment error that understated the program's costs by $2.5 million. This amount includes a processing error of $1.7 million we identified during our review that had not been previously identified by Energy. Energy failed to adequately monitor cumulative contract costs. Ceilings, or caps, on total contract values and on certain contract components, such as other direct costs, impose limits that help the government manage contract costs. Contract ceilings are particularly valuable tools for monitoring time and materials contracts, which have few other mechanisms for managing cost-effectiveness. Our review of contract and interagency agreement ceilings for the four major OWA contractors showed that the ceiling amounts of certain contracts were increased numerous times. For example, Westwood's total contract ceiling was modified six times, including four times during the last 9 months of the contract. 
However, Westwood still exceeded the cost ceiling for other direct costs by nearly $2 million by the end of the contract. Energy paid these amounts, thereby undermining the contract ceiling as a cost control and further demonstrating Energy's lack of a proper control structure to manage contract costs. Energy also did not properly track and report contract costs in internal and external financial reports. Energy improperly assigned some costs of OWA activities to other program reporting units and, in some cases, assigned the costs of other program reporting units to OWA. For example, Energy improperly assigned the costs of OWA services provided by Westwood to other program reporting units, in effect using other programs' funds to pay for OWA activities. This occurred because Energy did not assign the costs of the invoice according to the services provided to each program, but instead either divided the total cost of the invoice equally across all programs receiving services or assigned costs based upon the amount of funds available in the different program reporting units. At the end of Westwood's first contract, $1.6 million of costs associated with OWA activities were assigned to other program reporting units, understating the OWA program costs. Conversely, Energy, using similar methods, improperly used $2.1 million of OWA funds to pay TDI costs under its second contract that were unrelated to OWA activities, overstating the OWA program costs. Assigning costs on a basis other than the actual cost of services not only misstates program costs but also hinders the agency's ability to adhere to federal cost accounting standards. In addition, Energy used $1.3 million of funds from two other Energy program reporting units to pay for SEA services in fiscal years 2003 and 2004. Although the amount transferred was authorized by senior Energy management, it was not reported externally in Energy's September 30, 2004, report to Congress on EEOICPA expenditures.
As a result, the cost report was understated by $1.3 million. We identified a total of $5.0 million (gross) in cost assignment errors and reporting omissions. These errors, which were partially offsetting, resulted in a net understatement of OWA program costs of $800,000 and prevented the agency and other interested stakeholders from knowing the true cost of program activities at any given time. We further identified a $1.7 million payment error related to SEA billings that occurred in December 2004. GSA paid SEA for its services and was reimbursed by SSC NOLA. SSC NOLA then received a reimbursement from Energy through the intragovernmental payment process, but because of a processing error it was not reimbursed for the full amount it was owed. Neither SSC NOLA nor Energy identified the mistake. The error went undetected by Energy because it did not reconcile reimbursements made to SSC NOLA to appropriate supporting documentation in accordance with Energy accounting policy. The error understated the OWA program's costs until it was corrected in September 2005, after we brought it to the attention of the Defense Finance and Accounting Service, the Department of Defense unit that handled SSC NOLA's intragovernmental payment transactions. The fundamental internal control weaknesses associated with Energy's contract payment process contributed to $26.4 million in improper and questionable payments to contractors that we identified as part of our review. We employed a variety of forensic auditing techniques to assess the validity of Energy payments for OWA activities and identified $24.4 million in improper and questionable payments to contractors for direct labor billed under improper labor categories and the inappropriate use of fully burdened labor rates. We also identified $778,613 in improper and questionable payments for other direct costs, including amounts for add-ons and base fees, and certain travel and related costs.
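The gross and net cost-misstatement figures above reconcile arithmetically. The sketch below (a quick check, not part of our audit methodology; the sign convention is our assumption: negative values understate reported OWA costs, positive values overstate them) shows how $5.0 million of gross errors nets to only an $800,000 understatement.

```python
# Cost assignment errors and the reporting omission, in $ millions.
# Sign convention (our assumption): negative = OWA program costs
# understated, positive = OWA program costs overstated.
errors = {
    "Westwood OWA costs assigned to other program units": -1.6,
    "non-OWA TDI costs paid with OWA funds": +2.1,
    "SEA services paid with other programs' funds, omitted from report": -1.3,
}

gross = sum(abs(v) for v in errors.values())  # total magnitude of errors
net = sum(errors.values())                    # net effect on reported costs

print(f"gross errors: ${gross:.1f} million")
print(f"net effect: ${net:.1f} million")  # negative: understated by $0.8 million
```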
Further, we questioned whether certain other payments toward the end of the program for furniture and computer equipment, totaling nearly $1.2 million, were an efficient use of government funds. Given Energy's poor control environment and the fact that we reviewed only selected Energy payments, other improper and questionable payments may have been made that have not been identified. Table 3 includes the net amount of improper and questionable payments when we could determine a net amount. When it was not practical for us to determine offsets or reductions that might be due to the contractors, we use the gross amounts that Energy paid. Any potentially recoverable amounts would need to be determined after consideration of any reductions or offsets. The following sections provide additional information on the improper and questionable payments we identified. A significant portion of OWA program expenditures was for labor provided by contractors and their subcontractors. For the four major contractors discussed in this report, Energy paid $45.3 million for contracted and subcontracted labor, representing approximately 49 percent of total OWA program costs reported by Energy. In light of Energy's weak controls over labor category requirements and insufficient observation and monitoring of contracted services, we performed a variety of tests on the amounts billed for labor. Our tests disclosed that certain contractors used inappropriate labor categories for billing purposes, and as a result the government made improper payments for those charges. We also found that some labor billings could not be validated because of insufficient criteria for labor category qualifications but were paid nonetheless.
Additionally, Energy and SSC NOLA paid prime contractors for subcontractor labor at fully burdened labor rates instead of paying only the costs incurred by the prime contractor, and also paid time and a half for certain hours worked beyond a standard 40-hour week, which was not in accordance with the contract. Westwood billed over half a million dollars in labor charges under labor categories for which the employees were not qualified. We reviewed résumés for 25 Westwood employees whose time was billed to OWA. Our comparison of employee résumés to the qualifications required under Westwood's contract revealed that Westwood billed for 7 employees under labor categories and at billing rates for which the employees were not qualified, resulting in $602,000 of improper payments by Energy. For example, the analyst labor category required a college degree and at least 5 years of experience in a specific field, such as health or physical sciences or environmental studies. However, the employees we reviewed who were billed as analysts did not have college degrees or did not have the necessary years of experience. Westwood's Project Manager told us that he was unfamiliar with the minimum qualifications negotiated under the contract. Further, Westwood management officials had not previously compared the employees' qualifications to the requirements listed in the contract, but they told us they have since taken steps to screen applicant qualifications. We also identified $1.9 million of improper payments to SEA that resulted from SEA's use of inappropriate labor categories. Using data mining and other forensic auditing techniques, we selected 94 individuals directly billed by SEA and requested their personnel files in order to compare education and experience qualifications to what the contract required.
Because personnel are often billed under more than one labor category under the contract, the personnel files we requested represented 187 comparisons. However, as discussed later, we were able to make only 87 comparisons. For these 87, we identified the following instances of labor costs billed under inappropriate labor categories, which resulted in improper payments by Energy. SEA billed $970,930 under labor categories that did not reflect the actual duties performed. SEA had three consecutive program managers who functioned as the project lead and were the main liaisons between Energy and SEA officials. Yet these three managers were not billed to the government under the program manager (average billing rate of $106/hour) or project manager (average billing rate of $117/hour) labor categories but rather as subject matter experts, a category billed at an average rate of $205/hour. Project and program managers, according to the labor descriptions under the contract, generally required the ability to manage contract support operations, including organizing and planning activities. A subject matter expert, on the other hand, provides assistance in "enhancing the alignment of Information Technology strategy with business strategy" and "evaluates expectations for and capabilities for the information management organization." In November 2005, we asked SEA officials why these project leads were not billed under the less costly project or program manager labor categories, but they offered no viable explanation.
On March 23, 2006, counsel to SEA told us that they disagreed with our view that these were improper payments because the three individuals' "ability to manage was informed and enhanced by their expertise in engineering and information technology." Further, counsel to SEA said that "given their extensive expertise in their fields, it seems appropriate for SEA to have billed these senior personnel as subject matter experts." We disagree and find no basis for the government to have paid more for program manager labor than the agreed rate for that labor category. SEA also billed, and Energy paid, $649,182 for services provided by four employees who were not qualified for the labor categories under which they were billed. Two employees were billed as systems engineers (average billing rate of $85/hour) but did not meet the minimum of 5 years of programming experience; they had 3 years or less of general computer experience. A third employee did not have the years of experience necessary to be billed as a case management technician, which required a minimum number of years of medical records experience. The fourth employee was billed as a senior computer scientist at an average billing rate of $117/hour, but the documentation maintained in the employee's file did not provide adequate evidence that the employee met the minimum of 5 years of programming experience. Charges for two other SEA employees, totaling $276,808, were billed as graphics illustrators, although the job descriptions for these employees indicate that they performed administrative support services, for example, project scheduling, support activities, and making travel arrangements and preparing travel-related paperwork. We found no basis for these employees to be billed as graphics illustrators at an average billing rate of $52/hour.
Further, because general and administrative costs, such as those associated with the administrative duties performed by these two employees, are recoverable through a component of the fully burdened labor rates used under the time and materials task orders, the costs associated with these two administrative employees may be duplicative. Of the 187 total comparisons we initially planned to make, 72 were not possible because certain labor category descriptions negotiated for use under the BPA lacked sufficient criteria for assessing whether a person was qualified to be billed at that labor category. In total, about one-third of the labor categories used by SEA, representing $15.6 million in questionable payments by Energy, lacked sufficiently explicit descriptions of the requirements and duties of the position for us to assess the appropriateness of the amounts billed under them. For example, the description for senior management analyst listed desirable skills and knowledge in areas such as business and mathematics, but did not list education or years-of-experience requirements. SEA billed $7.2 million, at an average billing rate of $90/hour, under that labor category. We found that billings in this labor category included amounts for case processors (who generally were registered nurses or had medical backgrounds) and records management personnel (who generally had degrees in business or records management). Based upon discussions with both SEA and SSC NOLA officials and our review of the BPA and underlying GSA schedule contract, we found that labor categories reflecting the duties, education, and skills Energy required for the work performed did not exist under the GSA contract, which was originally let solely for information technology activities.
Instead, SEA used labor categories that "best fit" the work performed and that had what SEA considered to be an appropriate billing rate for the services provided. GSA, as the contracting officer, did not amend the contract to align labor category descriptions with the needs of the government. While Energy developed a crosswalk of labor categories to case processing job titles, this crosswalk did not specify skills or education qualifications that would supplement those originally provided for under SEA's contract. Further, an additional 28 labor category comparisons could not be made because SEA either did not obtain or had not retained résumés and other documents evidencing independent validation by SEA (or confirmation of validations performed by others) of employee skills, work experience, and education requirements for personnel it obtained through temporary hiring agencies or other subcontractors. Without this information, it was not possible for us or others to determine whether these temporary personnel were billed under appropriate labor categories and at appropriate billing rates. These 28 comparisons related to a portion of the $10.1 million in subcontracted labor charged to the government. After removing other improper and questionable amounts noted elsewhere in this report to prevent double-counting, we consider $2.1 million of subcontracted labor billings to be unsupported and therefore questionable payments by Energy. SEA and Westwood used fully burdened labor rates that included base wages plus fringe benefits, overhead costs, and profit to bill for subcontracted labor but had no basis to do so under their contracts. This practice resulted in over $4 million in "markups" on subcontracted labor charges that were paid by Energy.
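The more-than-$4 million markup figure reconciles with the contractor-level amounts detailed in the paragraphs that follow. A minimal arithmetic sketch (figures in $ millions, taken from the report's detailed findings):

```python
# Markup = amount billed to the government at fully burdened labor
# rates minus the prime contractor's incurred subcontract cost.
# Figures in $ millions, from the detailed findings.
def markup(billed: float, cost: float) -> float:
    return round(billed - cost, 2)

sea_markup = markup(10.08, 6.86)      # SEA subcontracted labor -> 3.22
westwood_markup = markup(3.24, 2.23)  # Westwood independent contractors -> 1.01

total = round(sea_markup + westwood_markup, 2)  # over $4 million in total
print(sea_markup, westwood_markup, total)  # 3.22 1.01 4.23
```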
Based on our analysis, Energy should have paid only incurred costs for the subcontracted labor, which represented amounts paid by the prime contractors for labor obtained from temporary staffing agencies, other subcontractors, or independent contractors. The following is a discussion of SEA and Westwood billing practices and the resulting improper and questionable payments by SSC NOLA and ultimately Energy. Over 40 percent of SEA’s direct labor hours were provided by labor obtained under arrangements with temporary staffing agencies. In total, from December 2001 through December 2004, SEA paid subcontractors, including temporary staffing agencies, $6.86 million for the services. Instead of billing the government for this amount, SEA billed the government $10.08 million for these services under fully burdened labor rates, resulting in a markup of $3.22 million and improper and questionable payments by SSC NOLA and ultimately Energy. Energy inappropriately paid $7.12 million for work costing SEA $4.47 million that was not contemplated at the time the labor rate negotiations occurred. As previously noted in the background section, the GSA Inspector General reviewed the three Energy task orders for SEA services and reported that the case development activities performed by temporary staffing agencies under the second and third task orders were outside the scope of the underlying GSA FSS contract and a misuse of the contract vehicle that was designed for information technology services. These two task orders represented approximately 83 percent of total SEA services provided to Energy. Because the services were outside the scope of the underlying FSS contract, there was no basis for SEA to bill for the subcontracted services at other than cost. The contracting officer may add items not on the FSS only if all applicable FAR requirements are followed. 
These requirements include publicizing the government’s proposed contract action (FAR part 5), complying with the full and open competition requirements (FAR part 6), and meeting the source selection requirements (FAR part 15). In addition, the contracting officer should determine that the price of the items or services not on the underlying GSA schedule contract—here, case processing activities performed by labor obtained through temporary staffing agencies—is fair and reasonable. None of these requirements were satisfied for the $7.12 million of payments to SEA. Given that the FAR requirements were not met for this out-of-scope work, the schedule rates were inapplicable. Accordingly, the $2.65 million markup of subcontractor rates over cost was not properly supported and was improper. SEA also billed for subcontracted services that were within the scope of work of the underlying contracts and task orders but may have been inappropriately paid by Energy using fully burdened labor rates. The time and material payment clause included in the SEA contracts, FAR 52.232-7, states that “the Government will limit reimbursable costs in connection with subcontracts to the amounts paid for supplies and services purchased directly for the contract.” SEA paid $2.39 million for the subcontracted labor under these agreements but billed the government $2.96 million for a markup over cost of $569,798. There are currently differing views in the contracting community (including government agencies) regarding how the payment clause is to be applied by contracting agencies when paying contractors for services provided by their subcontractors when the contract is otherwise silent on this matter. The clause provides that based on invoices or vouchers approved by the CO, the contractor will be paid an hourly rate amount “computed by multiplying the appropriate hourly rates prescribed in the Schedule by the number of direct labor hours performed. 
The rates shall include wages, indirect costs, general and administrative expenses, and profit." The clause also provides that reimbursements to contractors for subcontractor services shall be limited "to the amounts paid." The view of some in the federal contracting community is that prime contractors are to be paid for subcontractor labor based on the approved fully burdened labor hour rates, as if the prime contractor provided the services directly through its employees, since these are the rates the government agreed to pay for each labor category. Another view is that contractors are to be reimbursed only for what they pay their subcontractors for services. An amendment to the FAR has been proposed that attempts to clarify the application of the clause. For the purpose of this report, we have identified payments in the amount of $569,798 as questionable based on a literal application of this clause with regard to reimbursement to contractors for subcontractor services. Westwood paid four independent contractors $2.23 million for services provided, yet billed Energy $3.24 million using fully burdened labor rates, for a markup of $1.01 million. On October 21, 2002, Energy modified its contract with Westwood to provide for a new labor category, senior scientist, to be billed at a fully burdened labor rate of $250/hour. The senior scientists were doctors who performed quality assurance review checks over the case files before and after the files entered physician panel review. Energy officials advised us that they negotiated this labor category and the high hourly rate in order to ensure that they had full access to the medical specialists necessary to meet the increased case-processing demands of the program. Despite this, and despite Westwood's indication in its cost justification for adding the senior scientist labor category that the senior scientists would be added as employees, Westwood engaged them as independent contractors.
When we inquired about the employment status of the senior scientists, Westwood's President told us they were full-time employees. However, the documents we reviewed, including written agreements between the senior scientists and Westwood, showed that Westwood engaged the senior scientists as independent contractors at hourly rates ranging from $110/hour to $200/hour and that they were ineligible to participate in benefit packages. In response to our request for Internal Revenue Service (IRS) Form W-2, Wage and Tax Statements, for the senior scientists, Westwood instead provided us with IRS Form 1099-MISC. Form 1099 is used to report amounts paid to independent contractors, not employees. Because Westwood engaged these personnel contrary to the negotiations with Energy and the terms of the contract, Westwood inappropriately billed the government, and Energy improperly paid, a $1.01 million markup for these services. Westwood billed, and Energy improperly paid for, hours beyond a standard workweek at one and a half times the billing rate for the labor category under the contract. Under its time and materials task orders, hours worked were to be billed at the labor rates negotiated in the contract, and no provision was made for overtime rates. Over the 4 years Westwood provided services to OWA, it billed Energy for 168 hours at time and a half, for an incremental difference over the regular labor rates of $3,019. Westwood's President told us that the overtime payments were verbally approved and allowed by Energy, as evidenced by Energy's approval of Westwood invoices that clearly showed the number of hours billed at time and a half. However, Westwood's contractual agreement was never modified to reflect this approval. Other direct costs, such as travel, purchases of equipment, and add-on rates and base fees, for the four major contractors in this report totaled approximately $10 million, or 11 percent, of total OWA program costs reported by Energy.
These costs are subject to a variety of terms and conditions contained in the contracts and in the FAR. For example, allowable fees are negotiated specifically for each contract. Also, the contracts may have incorporated either the Federal Travel Regulation (FTR) or the Joint Travel Regulations used by the Department of Defense, which define allowable travel costs. For time and materials contracts, FAR 16.601(a)(2) and (b)(2) limit other direct costs to those separately identifiable from the costs included in the fully burdened labor rate. Because of the weaknesses in Energy's invoice review process identified in this report, specifically the weaknesses related to Westwood's invoices, and the significant amounts of other direct costs billed to the government by both SEA and Westwood, we obtained and reviewed the supporting documentation for selected other direct costs these two contractors billed the government. We also analyzed the fees TDI billed the government on its invoices. We identified the following questionable and improper payments. Energy made $557,429 in improper payments to Westwood for amounts the contractor added to billings for other direct costs in the form of a 12 percent add-on rate. However, there is no provision in the contract to justify such a charge. The contract provided that other direct costs were not to exceed $120,000 in the base year and did not provide for additional amounts, such as fees, profits, or add-on rates. Additionally, when determining the "best value" among the proposals provided in response to Energy's solicitation for the work, Energy deemed Westwood's proposal of a flat amount for other direct costs (with no add-on rates) to be in conformance with the solicitation, while a competitor's addition of an add-on rate for general and administrative expenses was deemed contrary to the solicitation. Energy also improperly paid $98,305 in base fees to TDI. The base fee was negotiated up front by Energy and TDI.
It was calculated based on the level of effort provided under the contract and was limited to 3 percent of the estimated cost ($2,761,581), for a total of $82,847. The contract did not distinguish between the level of effort provided by TDI and that of any other contractor TDI viewed as its subcontractor, including Westwood. Westwood, in addition to its previously discussed prime contract with Energy, provided services through TDI in what TDI and Westwood viewed as a prime-subcontractor relationship. From March 2002 through September 2004, TDI billed Energy for $181,152 in base fees that, according to TDI, represented base fees for both TDI and Westwood. Because the base fee under the prime contract was limited to $82,847, TDI overbilled and Energy improperly paid $98,305. Energy paid $12,418 for per diem and commuting costs billed by Westwood related to the physician panels that were not allowed under Westwood's task orders, which incorporated the FTR. For example, Energy improperly paid Westwood for per diem and commuting expenses of physicians who lived in the local area and per diem to out-of-town physicians for days that their own time records showed they did not work (sometimes weeks at a time). We considered an additional $4,704 in payments to be questionable because they were not supported sufficiently for us to determine whether the amounts billed were in accordance with the FTR. Energy also made $14,326 in improper and questionable payments for first-class airfare purchased by SEA. First-class airfare is prohibited by SEA's task orders, which incorporate the Joint Travel Regulations, except under certain circumstances, and those circumstances must be clearly documented in the travel voucher. Energy improperly paid $5,207 in airfare when at least one leg of the trip was first class and was not justified in the travel documentation supporting the trip. We also questioned payments of an additional $9,119 in first-class airfare.
The travel documentation supporting these airfare costs contained some explanation for the use of first class, generally related to availability. For example, one traveler noted "only first class available." However, the travel regulations state that travelers should determine travel requirements in sufficient time to reserve and use coach accommodations. Therefore, we question whether the travelers' justifications were sufficient under the terms of the contract. We identified $91,431 in other miscellaneous payments that Energy improperly made to Westwood and TDI. Of this amount, $45,631 was for duplicate and erroneous billings from Westwood. In one case, we found that Westwood billed the government multiple times for the same cost for a physician serving on review panels; the duplicate amounts for this one physician equaled $28,783. Westwood also billed Energy for a plane ticket that was never used and was subsequently credited back to Westwood by the airline ($643), and therefore should not have been billed to Energy. These duplicate and erroneous billings were likely not identified prior to payment because Energy did not regularly obtain documentation supporting Westwood's invoices or sufficiently review the documentation it had received. Additionally, TDI billed the government twice for work provided by its subcontractor, Westwood, during the month of April 2004 instead of billing the subcontractor's April and May 2004 invoices. The subcontractor's April invoice included more costs than its May invoice; therefore, Energy improperly paid an incremental amount of $19,277. Energy also improperly paid at least $26,523 in other costs billed by Westwood that were not permitted under the terms of the contract and the FAR. These payments included $21,172 for monthly phone bills, $4,603 for staff parking permits, and $748 for water cooler rentals.
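The base fee overbilling to TDI described above follows directly from the contract terms. A quick arithmetic check (dollar figures from the report):

```python
# Base fee ceiling under TDI's prime contract: 3 percent of the
# contract's estimated cost (figures from the report).
estimated_cost = 2_761_581
fee_ceiling = round(estimated_cost * 0.03)  # 3 percent cap -> 82847

# TDI billed base fees covering both itself and Westwood, but the
# prime contract capped total base fees at the ceiling amount.
billed_fees = 181_152
overbilled = billed_fees - fee_ceiling      # improper payment -> 98305

print(fee_ceiling, overbilled)  # 82847 98305
```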
We identified $1,162,919 in purchases of furniture and equipment and related storage costs that may not have been an efficient use of government funds given that Congress was giving consideration to transferring responsibility for the program to another agency. As part of our review of overall program costs, we noted a significant increase in program costs during the last 6 months of the program beginning in July 2004. For example, the amount of SEA’s invoices increased approximately 87 percent from a monthly average of $1.2 million for the 6 months prior to July 2004 to $2.3 million for the following 3 months. The increase in program spending followed a June 2004 transfer of $21.2 million to OWA. According to Energy officials in a March 2004 testimony before the Senate Committee on Energy and Natural Resources, Energy transferred the funds in part to reduce the backlog of unprocessed applications by increasing the number of case developers and assistants as well as the number of physicians serving on the review panels. In July 2004, Energy ordered $748,409 of modular furniture that was to be installed in new work space to be occupied by claims processing personnel provided by SEA. However, by August 2004, OWA had initiated a hiring freeze. According to the program manager at the time, Energy was unable to cancel the furniture order and the furniture was received in September 2004. Energy paid $6,060 a month through fiscal year 2005 to store the furniture at a storage facility, incurring costs of $72,720 for 12 months. (See fig. 3.) We noted that Energy prepaid the manufacturer $50,000 of installation charges in 2004 even though the furniture was in storage for 12 months and not installed until February 2006, over a year and a half later, for use by another Energy program. Total costs associated with the furniture were $821,129, which we have classified as a questionable use of government funds. 
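The furniture figures above reconcile as follows. (A quick check; the reported total equals the order amount plus 12 months of storage, which suggests the $50,000 prepaid installation charge was part of the order amount rather than an addition to it.)

```python
# Reconciling the questioned furniture costs reported above.
furniture_order = 748_409   # modular furniture ordered in July 2004
monthly_storage = 6_060     # monthly storage charge
months_stored = 12          # through fiscal year 2005

storage_total = monthly_storage * months_stored     # -> 72720
total_questioned = furniture_order + storage_total  # -> 821129

print(storage_total, total_questioned)  # 72720 821129
```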
During this period of increased program spending, SEA more than tripled its workforce supporting OWA at the direction of Energy. To equip these personnel, SEA purchased and subsequently billed the government for 200 desktop computers, 5 laptop computers, 6 industrial copiers, and 4 fax machines at a cost of $341,790. This equipment was ordered from June 21, 2004, through July 27, 2004, and SEA received all items by August 2004. Because the program ceased new case processing and SEA began downsizing its staff in November 2004, SEA only used these items in support of the program for at most 5 months. According to Energy officials, Energy took possession of the equipment from the contractor when SEA’s contract ended on December 31, 2004. At the time of our inquiry nearly 8 months later, however, 134 items, with a cost of $241,725, were still unused and located in storage rooms at Energy or could not be located. The questionable and improper payments we identified during our review represent nearly 30 percent of total program funds spent through September 30, 2005. Given the lack of fundamental internal control over the payment, monitoring, and reporting of contractor costs, and the fact that we did not review all program payments, the amount of improper and questionable payments could be even greater. Further, the control weaknesses at Energy and SSC NOLA could be indicative of more systemic problems at both organizations that could put other program funds at risk. Correcting these problems will require a major reassessment of existing practices, policies, and procedures and the overall control environment. The success of this effort will depend on the level of commitment by senior management in setting the “tone at the top” and working proactively to see that the needed changes are effectively implemented. We are making 16 recommendations to address the issues identified in this report. 
We are making 14 recommendations to Energy to (1) improve controls over the review and approval process for contractor invoices; (2) strengthen accountability for government-owned equipment purchased by contractors; (3) improve reporting and control of overall contract costs, including subcontractor costs; and (4) pursue opportunities for recovery of improper and questionable costs identified in this report. We are also making 2 recommendations to SSC NOLA to reassess its procedures for carrying out its responsibilities for delegated contract administration in connection with interagency agreements. To improve Energy’s controls over its review and approval process for contractor invoices, we recommend that the Secretary of Energy instruct the Deputy Secretary to:

Develop an assessment process to use as a basis for determining reliance on and monitoring the performance of other federal agencies that perform key contract management functions on Energy’s behalf, such as monitoring contractor services, review and approval of invoices, and approval of subcontractor agreements.

Establish policies and procedures for an effective review and approval process for contractor invoices, including (1) conducting and documenting observations (surveillance) of services provided by contractors, (2) linking those observations to the invoice review and approval process, (3) verifying that labor hours are billed at appropriate rates and that employees are qualified to perform the work consistent with the terms of the underlying agreement, and (4) ensuring that other direct costs are properly supported and reviewed prior to payment.

Develop guidance for CORs or other payment/review officials that details appropriate steps for review and approval of invoices and appropriate documentation of that review process.
Require timely and periodic reviews of contractual agreements, especially time and materials contracts or task orders, including the statements of work, to ensure that agreements continue to reflect both the work that is being performed and the needs of the agency.

To strengthen Energy’s accountability for contractor-acquired government property, we recommend that the Secretary of Energy instruct the Deputy Secretary to establish or reinforce existing policies and procedures to:

Approve contractor equipment purchases in advance.

Verify that contractors receive and safeguard the assets during the operation of the program, including physical inventories or some other process to validate that all assets paid for are accounted for.

Conduct timely physical inventories of contractor-acquired government property upon taking possession of the equipment at the close of the contract, including resolving with the contractor any missing or defective items.

To improve Energy’s reporting and control of time and materials and cost reimbursement contract costs, including subcontractor costs, we recommend that the Secretary of Energy instruct the Deputy Secretary to establish or reinforce existing policies and procedures to:

Review subcontracts, including those for labor obtained through temporary staffing agencies.

Require contractors to obtain formal approval in advance for significant new subcontract agreements or changes to existing subcontract agreements, such as significant changes in the nature, scope, or amount of subcontracted activities, including labor obtained through temporary staffing agencies.

Systematically monitor overall contract costs and require documented justifications from contractors for increased ceiling amounts, as well as specific documentation to support Energy’s approval of the increases before incurrence of any costs beyond the ceiling.
Properly assign costs incurred to the correct program at the time of payment and accurately report such costs in internal and external financial reports.

Reinforce requirements for reconciliation of intragovernmental payments for amounts due under interagency agreements to appropriate supporting documentation.

To pursue opportunities for recovery of improper or questionable costs identified in this report, we recommend that the Secretary of Energy, in coordination with the Administrator of General Services and, in relation to SEA costs, the Commanding Officer, Space and Naval Warfare Systems Center, New Orleans, take the following actions:

Determine, in consultation with DOJ, the amount, if any, of potentially recoverable costs associated with the improper and questionable payments for labor associated with SEA that we identified in this report in light of the settlement agreement dated November 2005.

Determine whether the other improper and questionable payments of contractor costs, including payments to SEA that are identified in this report, should be reimbursed to Energy by any contractor.

In light of the findings in this report, we recommend that the Commanding Officer, Space and Naval Warfare Systems Center, New Orleans, reassess the organization’s procedures for carrying out its delegated contract administration responsibilities in connection with interagency agreements, including the following:

Assess the adequacy of the review and approval process for contractor invoices, including (1) oversight and monitoring of contractor services and linkage of these activities to the invoice review and approval process, (2) verification that labor hours are billed in the appropriate categories at the appropriate rates, and (3) determination that contractor travel and other direct costs are in accordance with the contract and applicable federal regulations.
Establish policies to document guidance sought from GSA and the direction received on all matters of substance, including the use and billing implications of subcontracted labor, and to communicate the direction provided by GSA to the customer agency (e.g., Energy) for consideration by the customer. In the letter transmitting its detailed written comments on a draft of this report, Energy stated that it agreed with the spirit and intent of our recommendations and that it will give careful consideration to each of them. Energy also said it would revise its current policies or procedures as appropriate and described corrective actions it had already undertaken to improve its controls, including those over interagency contracting. In its detailed comments, Energy agreed with some of our findings and disagreed with others without specifically commenting on any of the 16 recommendations, including the 14 directed to Energy. In particular, Energy (1) disagreed with our view that it was ultimately responsible for the issues that we identified relating to payments and controls for SEA, a contractor obtained through an interagency agreement; (2) stated that it was engaging the Defense Contract Audit Agency (DCAA) to audit the costs of two contractors that we reported as having a number of issues related to improper payments, and that it considers this to be a control that addresses some of our findings; (3) disagreed with our findings that Energy improperly paid $557,429 to Westwood for add-on rates and $98,305 to TDI for base fees; and (4) stated that its June 2004 transfer of $21.2 million was proper and that the large purchases of furniture and equipment near the end of the program were also proper. Energy also observed that the November 29, 2005, settlement and release between the government and SEA would appear to preclude the recovery of any additional money from the contractor for the improper and questionable payments that we identified in this report. 
In written comments reprinted in appendix III, SSC NOLA stated that it concurred with the two recommendations calling for it to reassess its procedures for carrying out its responsibilities in connection with interagency agreements and that it expects to complete actions on both recommendations by August 1, 2006. SSC NOLA separately provided technical comments, which we have incorporated as appropriate. Energy took issue with a number of our findings related to SEA contract payments because Energy did not agree that it had ultimate responsibility for the contract with SEA. Energy stated that it was the responsibility of the contractor to comply with the terms of its contract. Energy stated that it is a customer of SSC NOLA and, as such, had no direct contractual relationship with the contractor, SEA. Energy’s position is that SSC NOLA was responsible for conducting appropriate oversight and administration of contractor costs and that it had relied on SSC NOLA and GSA to ensure that SEA complied with the terms of its contract. Energy further stated that it deferred to SSC NOLA and GSA on issues with the SEA contract such as labor categories, billing rates, qualifications of personnel, and subcontractor arrangements and that those issues were the responsibility of SSC NOLA and GSA, not Energy. We disagree with Energy’s position that it had no responsibility as it relates to the propriety of the payments made to SEA. As discussed in the Background section of our report, in cases where authority for contract oversight and administration is delegated among multiple agencies, ultimate responsibility for the contract rests with the customer agency (receiving agency), in this case Energy. Energy cannot assign or delegate away its responsibilities through interagency agreements; the ultimate responsibility for ensuring the success of the contracted efforts as well as the propriety of payments remains with the receiving agency. 
In particular, the Economy Act requires that agencies ordering and paying for services under Economy Act agreements are responsible for ensuring that they receive the required services and pay for actual costs that were incurred by the performing agency, in this case SSC NOLA. While Energy may not have had sole responsibility for ensuring that payments made to SEA were proper, Energy was responsible for making sure that others were adequately conducting work on its behalf to ensure that program funds were not used to make questionable and improper payments. Notwithstanding its stated view that it was the responsibility of others and not Energy to conduct appropriate oversight and administration of the SEA contract, Energy stated that it did conduct observations and monitoring of SEA services. Energy further stated that it linked those observations to amounts billed each month through multiple levels of metrics and the contractor’s monthly cost report. However, we found that Energy did not receive any of the SEA billings under the interagency agreement. Also, we found that Energy had no processes in place to link the cost reports, any metrics it may have produced, or any observations it may have made to amounts that the contractor billed each month. Further, because information in the contractor’s monthly cost report was not compared to amounts billed each month, the procedures Energy described in its comments would be of limited value as a control process against improper payments and would not provide Energy with the necessary assurance that amounts subsequently paid to the contractor were appropriate. In response to a number of our findings related to improper payments made to Westwood and TDI, Energy stated that it is currently having DCAA audit these contracts. Energy further stated that these audits were initiated as part of its “normal course of business” and anticipated that many of the issues we cited would normally be identified as part of a DCAA audit. 
While we agree that a DCAA audit of contract costs can provide a detective control to help determine whether contractor costs were proper, reliance on an after-the-fact audit is not an acceptable replacement for the type of real-time monitoring and oversight of contractor costs (preventive controls) that we found to be lacking at Energy. Further, a DCAA audit of civilian contractor costs is not automatic and must be procured at additional cost to the government. In addition, as stated in our report, the numerous issues that we identified with Westwood and TDI occurred over a 4-year period. It is important that Energy establish a control environment that includes control activities that prevent questionable or improper payments in the first place or detect them soon after they occur so that they can be resolved in a timely manner, thus ensuring program funds are fully available to achieve the purposes of the program. Reliance on an audit by DCAA in 2006 or later of contractor activity that began in 2001 is not an efficient or effective approach to implementing proper internal controls over payments to contractors. Energy disagreed with our findings that it improperly paid $557,429 to Westwood in the form of a 12 percent add-on rate and also improperly paid $98,305 to TDI for base fees. Energy stated that Westwood had an approved rate of 13 percent for general and administrative expense that was verified by the CO in a DCAA pre-award audit. However, we found that the 13 percent general and administrative expense rate that Energy refers to was evaluated in the context of a price proposal for a cost plus contract. Further, under time and materials contracts like this one, general and administrative expenses are typically included in the hourly rate associated with each labor hour.
While FAR 52.232-7 allows for “reasonable and allocable” material handling costs, including general and administrative expenses, for materials and subcontractors to the extent that they are “clearly excluded from the hourly rate,” neither Westwood nor Energy provided evidence during our review that any such costs were clearly excluded from Westwood’s labor rates. Therefore we stand by our conclusion that the 12 percent add-on rate to Westwood’s time and materials contract was improper. With respect to the improper payment to TDI of $98,305 in base fees, Energy stated that our approach of adding TDI’s fee to the fees associated with its subcontractor (Westwood) was not appropriate because there is no “base fee” in Westwood’s contract. Energy’s stated basis for its position was that the TDI contract was a cost-plus-award-fee type while TDI’s subcontract with Westwood was a cost-plus-fixed-fee type contract, with no base fee. However, as stated in our report, TDI did not have a subcontract or other binding agreement with Westwood for services Westwood provided to Energy that TDI subsequently included on its invoices to Energy. Further, Energy’s contract with TDI limited the amount of base fees to $82,847 applied to the level of effort (i.e., labor hours billed) provided by TDI and made no distinction between hours incurred by the prime contractor or any contractor viewed as a subcontractor. Thus, the contract terms necessitate considering, as we did, the base fees of TDI and Westwood together. As stated in our report, Energy paid a total of $181,152 in base fees when the maximum should have been $82,847, thus resulting in improper payments of $98,305. Energy stated that the information in our report showing that $21.2 million in funds was transferred to OWA in June 2004 during the same time that legislation was moving through Congress to transfer the administration of the program to DOL implies that Energy’s ramp-up activities were unsupportable. 
Energy also stated that using these funds to purchase furniture and equipment in the summer of 2004 was consistent with congressional approval of its reprogramming actions. Our report does not imply that Energy’s ramp-up activities were unsupportable. Our report provides extensive background information on the program, including the June 2004 transfer of $21.2 million in funds to OWA in an effort to clear the backlog of claims. This background information was provided for context. As discussed in our report, however, we did identify $1,162,919 in purchases of furniture and equipment and related storage costs that may not have been an efficient use of government funds given that Congress was giving consideration to transferring responsibility for the program to another agency. As discussed in the report, Energy ordered $748,409 of modular furniture in July 2004 that was to be installed in new work space to be occupied by claims processing personnel provided by SEA. However, by August 2004, OWA had initiated a hiring freeze and therefore placed the furniture in a storage facility, incurring costs of $72,720 for 12 months. Further, Energy paid the vendor $50,000 up front in 2004 for installation charges even though the furniture was not installed until February 2006, over a year and a half later, for use by another Energy program. Regarding our recommendation that Energy pursue opportunities for recovery of labor and other SEA costs that we identified as improper or questionable, Energy stated that the November 29, 2005, settlement agreement and release between the government and SEA would appear to preclude the recovery of any additional moneys for expenditures in support of Energy’s programs. 
Our report stated that we did not determine whether, or to what extent, the terms of the November 29, 2005, settlement and release with SEA may affect any potential additional monetary recoveries by the government for the questionable and improper payments made to SEA that we identified. We also stated in our report that the release clause of the settlement is limited to “covered conduct” investigated by the government related to two areas: billing indirect labor costs as direct labor costs and billing for employees in labor categories for which they were not qualified. It is important that Energy consult with DOJ in order to determine the recoverability of funds from SEA before concluding that the funds are not recoverable. Therefore, we reaffirm our recommendation. Discussions on other matters are provided following Energy's comments, which are reprinted in appendix II. SSC NOLA's comments are reprinted in appendix III. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to the Secretary of Energy and the Commanding Officer of SSC NOLA and interested congressional committees. Copies will also be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9508 or calboml@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are acknowledged in appendix IV. For this review, we considered costs recorded by the Office of Worker Advocacy (OWA), the Department of Energy (Energy) office tasked with administering Subtitle D of the Energy Employees Occupational Illness Compensation Program Act of 2000 (EEOICPA) from October 2000 through September 30, 2005. 
Our review focused primarily on costs incurred under contracts for services by four contractors: Eagle Research Group, Inc. (Eagle); Science and Engineering Associates, Inc. (SEA); Westwood Group, Inc. (Westwood); and Technical Design, Inc. (TDI). In total, payments made to these contractors represented approximately $55.5 million, or 60 percent, of total program costs reported by the EEOICPA program through September 2005. Our work did not extend to the program costs for claims research activities at Energy facilities. These activities were performed by Energy’s major facility operating contractors that are subject to audit by Energy’s Inspector General and to other reviews by Energy’s financial management officials. To assess the reliability of OWA cost data for purposes of our review, we reviewed reconciliations of OWA costs to amounts in the audited Statements of Net Cost for fiscal years 2001 through 2004. In addition, for all OWA cost data for the period October 2000 through September 2005, we (1) obtained electronic files of OWA costs and performed electronic testing for obvious errors in accuracy and completeness, (2) reviewed supporting documentation for selected payments to contractors and other vendors and compared them to OWA cost data, and (3) reviewed documentation obtained from selected contractors of cumulative billings for OWA costs and compared these amounts to OWA cost data. Except for the cost tracking and reporting issues identified in the internal control section of this report and the questionable and improper payments that we identify, and the effect of any future actions taken by the agency to recover any improper payments, the OWA cost information we reviewed is considered reliable for purposes of this report. We performed the majority of our work in Washington, D.C., at Energy and Westwood. We also performed work at SEA and TDI offices in Albuquerque, New Mexico. 
Additionally, we observed Energy’s furniture inventory located in Laurel, Maryland. To determine whether Energy’s internal controls provided reasonable assurance that improper payments to contractors would not be made or would be detected in the normal course of business, we used Standards for Internal Control in the Federal Government as a basis to assess the internal control structure—control environment, risk assessment procedures, control activities, information and communications, and monitoring efforts of Energy over its OWA program. Further, we reviewed contractual agreements, including prime contracts with four contractors; Energy’s interagency agreement with the Space and Naval Warfare Systems Center, New Orleans, (SSC NOLA); and certain subcontract agreements provided by prime contractors. We also considered (1) prior GAO reports on the EEOICPA program, Subtitle D; (2) the results of the reviews by the inspectors general of the General Services Administration (GSA) (concerning SSC NOLA’s use of SEA’s schedule contract) and Energy (concerning Energy’s use of interagency agreements); and (3) a prior audit report of the Naval Audit Service concerning SEA services provided to SSC NOLA. We obtained and reviewed current Energy policies regarding contracting and financial management matters, including Energy’s Acquisition Guide and accounting handbook. We also conducted interviews with program, procurement, and financial management personnel regarding policies and procedures that were in place over contract payments, and walk-throughs of key processes, such as the invoice review and approval process, to gain an understanding of Energy’s controls over contract payments. We conducted similar interviews with SSC NOLA and GSA officials to assess Energy’s contractual relationship with its federal contracting partners. We compared Energy’s controls to those recommended in our Standards for Internal Control in the Federal Government. 
To determine whether Energy’s payments to contractors were properly supported and a valid use of government funds, we used a variety of data mining, document analysis, and other forensic auditing techniques to nonstatistically select transactions or groups of transactions for detailed review. For the transactions we selected, we reviewed supporting documentation to assess the appropriateness of payments based upon contract documents and applicable federal regulations, such as the Federal Acquisition Regulation (FAR), Federal Travel Regulation, and Joint Travel Regulations. While we identified some payments as questionable or improper, our work was not designed to identify all improper or questionable payments or to estimate their extent. For each of the major contracts, we obtained copies of invoices from the contractor and compared the amounts to a listing of payments made by Energy. In addition to this high-level analysis, we performed detailed tests on labor and other direct costs, as described below. We obtained an electronic file of all SEA labor charges, for both employees and subcontracted labor, for analysis. Based upon our review of the file for trends or anomalies, we nonstatistically selected 94 employees for testing. SEA provided personnel files containing supporting documentation, such as employee résumés, and company information, such as hire and termination dates. Because each person may have been billed under more than one labor category, we attempted to make 187 comparisons of personnel information to labor category requirements. SEA was unable to provide us with proper documentation of personnel obtained through temporary hiring agencies, thus preventing us from making 28 comparisons. We could not make an additional 72 comparisons because the contract’s labor categories did not contain adequate education or experience requirements. We made 87 comparisons of employee qualifications to labor category requirements.
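The comparison counts described above account for every attempted comparison. A brief sketch (counts taken from the report text, illustrative only) confirms the breakdown:

```python
attempted = 187       # comparisons attempted (each person could span multiple labor categories)
blocked_no_docs = 28  # temporary-agency personnel lacking proper documentation
blocked_no_reqs = 72  # labor categories without adequate education or experience requirements
completed = 87        # comparisons of employee qualifications actually made

assert blocked_no_docs + blocked_no_reqs + completed == attempted
```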
To review Westwood labor charges, we obtained compensation information, such as W-2s and 1099s, and compared certain amounts reported to underlying payment records and amounts Westwood billed the government for these costs. For 25 Westwood employees supporting OWA activities, we compared their education and experience as documented on their résumés to the labor category requirements negotiated in its contract. We also compared the documentation of the two TDI employees’ qualifications to the contract labor category descriptions. Neither SEA nor Westwood provided a detailed list or breakdown of other direct costs as part of its invoice to Energy; therefore, we performed a preliminary review of all the supporting documentation for these two contractors’ other direct costs. From this documentation we identified a high volume of the following types of transactions at each contractor, for which we performed a more detailed review. Payments for equipment and travel costs incurred by SEA. The supporting documentation we reviewed for these costs included expense vouchers, vendor invoices, travel authorization forms, and plane ticket receipts and itineraries. Payments for services provided by independent contractors incurred by Westwood. These services were mainly provided by the physician panel members. We obtained and reviewed the supporting invoices or other documentation of time and costs for 6 of 167 physician panel members billed. We chose these 6 physicians for detailed review because of the high volume of charges or unusual charges that we noted during our preliminary reviews of supporting documentation. In addition to this review of payments made to the major contractors, we analyzed other payments made by Energy in support of OWA.
For example, we requested supporting documentation for a nonstatistical selection of payments based upon our analysis of payment information by payee and amount. We also performed an analytical review of Department of Labor (DOL) reimbursements under memorandums of understanding dated July 2001 and December 2004 for the operation of the EEOICPA resource centers. These reimbursements totaled approximately $11 million and were offset against costs for the Eagle contract and other OWA program activities. We reviewed the November 29, 2005, settlement agreement and release between the Department of Justice and SEA resulting from investigations by the government of alleged improper billing by SEA. We also reviewed the related January 2006 administrative settlement agreement between the Department of the Navy and SEA that provided for SEA to implement a compliance program to ensure that it adheres to lawful and ethical procedures and practices in all areas relating to its role as a government contractor. We provided Energy a draft of this report and SSC NOLA a draft of applicable sections of this report for review and comment. Energy’s Deputy Assistant Secretary of Planning and Administration, Office of Environment, Safety and Health, and SSC NOLA’s Commanding Officer provided written comments, which are reprinted in appendixes II and III, respectively. Energy and SSC NOLA also provided technical comments, which we have incorporated as appropriate. We also provided key officials of SEA, Westwood, Eagle, and TDI with draft summaries of the findings noted in this report relating to them. We incorporated as appropriate oral and written comments we received on these draft summaries from management officials from Westwood, Eagle, and TDI and from outside legal counsel for SEA. We performed our work in accordance with generally accepted government auditing standards in Washington, D.C., and at three contractor locations from February 2005 through March 2006. 
The following are GAO’s comments on the Department of Energy’s letter dated April 20, 2006.

1. See the Agency Comments and Our Evaluation section.

2. We have not included the acquisition letter attached to this letter in this report. The letter can be found on Energy’s Web site, www.doe.gov.

3. We did not assess or conclude on whether the government received “good value” from the contract with SEA. The scope of our work was to determine whether internal controls over program payments were adequately designed and whether program payments were properly supported as a valid use of government funds. We addressed financial management practices and procedures and whether payments were proper, not the value or quality of services received.

4. We modified the report for this additional information.

5. We disagree. Our report stated that the contract with Westwood did not fully reflect actual duties that were subsequently performed and provided three examples. The quotation provided by Energy in its comments is from the statement of work section of the Westwood contract entitled “Advisory Committee Activities,” which addresses only the work that Westwood should perform in support of the Advisory Committee; therefore, this language does not address the other activities Westwood performed.

6. Energy states that a complete cost proposal was obtained from Westwood. However, Energy does not address that our report stated that Westwood’s cost proposal was not incorporated into the prime contract with TDI nor was any other binding agreement created between Energy and Westwood relative to its cost proposal. Thus, as our report also stated, without an effective contractual agreement, including negotiated rates, it was not possible for Energy to adequately review the amounts TDI billed for costs attributed to what Energy characterizes as the subcontractor, Westwood. We reaffirm this position.

7.
A formal agreement between a prime contractor and a subcontractor protects not only the interests of the parties involved but also those of the government. Additionally, as discussed in the Agency Comments and Our Evaluation section of our report, after-the-fact detective controls, such as Defense Contract Audit Agency audits, are not a replacement for real-time monitoring and oversight of contract costs.

8. At the time of our first inquiry, Energy told us it was still updating and finalizing the locations and conditions of the government-owned equipment that SEA purchased. On September 29, 2005, the Director of the Office of Information Management within the Office of Environment, Safety and Health stated that “the identification and recording process is still underway.” This was 8 months after expiration of the SEA contract in December 2004. Our report also stated that an inventory of contractor-purchased, government-owned equipment that Westwood purchased was not conducted until December 2005, nearly 9 months after Westwood’s contract expired, which Energy does not dispute.

9. We did not review Energy’s work in 2006 to address the missing items.

10. We do not agree with Energy’s statement that “at no time did Energy exceed a contractually negotiated and established contract ceiling.” Our report stated that Energy did not establish internal control monitoring practices to effectively manage overall contract costs, including using contract ceilings to manage and encourage cost-effectiveness. We found, for example, that Westwood was paid amounts that exceeded the “not to exceed” ceiling on other direct costs each year under its first contract, which covered the 3½-year period August 2001 through February 2005, as well as under the second contract that began in September 2004. As a specific example, the other direct cost ceiling was $600,000 in year 3, but Westwood was paid $2,421,176, or $1,821,176 more than the “not to exceed” limit per the contract.

11.
Our report stated that the scope of our work was to determine whether internal controls over program payments were adequately designed and if program payments were properly supported as a valid use of government funds. We considered the furniture purchases in the context of both objectives.

12. Our report recognized that there are different interpretations of the time and material payment clause (FAR 52.232-7). For purposes of our report, we based our findings on the literal application of this clause. Further, as also stated in our report, we found that neither Energy nor SSC NOLA for SEA exercised sufficient management oversight to be fully informed of the nature, extent, scope of services, and terms of billing to the government for subcontracted services. In addition, we found inconsistencies in the application of what Energy refers to in its comments as its “method established for T&M contracts of requiring both primes and subcontractors to be reimbursed for direct labor based on their own established fully burdened labor rates.”

13. Energy originally told us that the hiring freeze instituted in August 2004 was because of the combination of a probable continuing resolution and the possible transfer of the program to DOL. We modified the report based on Energy’s written comments.

14. We modified the report to reflect that the $50,000 of up-front installation charges were paid in 2004 even though the equipment “was not installed until February 2006.” Notwithstanding this, the furniture installation fee, like the furniture, did not benefit OWA but rather another Energy program.

15. We did not review the procedures described as a corrective action to address our finding that Energy’s monitoring of services provided under the Westwood contract was insufficient. However, it will be important that changes are made to comprehensively address the conditions we found at the contractor and at Energy.
Our report stated that the productivity measures used by the contractor to monitor work performed by physician panel members were developed based on the hours actually billed on the invoices submitted by the physicians; therefore, Energy had no independent baseline to measure productivity or to assess the reasonableness of hours billed on the invoices submitted by the physicians. Further, our report stated that doctors who performed a quality check function on claims were also not sufficiently monitored. Energy did not systematically observe the hours worked and compare any observations to the amounts paid for those hours, nor did it determine that Westwood was adequately monitoring these services as a basis for its billings.

16. The contracting officer’s representative told us that supporting documentation for Westwood’s invoices was not requested because of the “voluminous amounts” of paper that Westwood would need to copy and transmit to Energy each month. Our report stated that one of the elements of the control weakness for other direct costs was that the contractor was not required to report a detailed breakdown of its other direct costs as stipulated by its contracts. Without this level of information, it was not possible for Energy officials to effectively review and approve these invoices for payment. Further, a substantial portion of the amount billed by Westwood for other direct costs was for temporary labor, and Energy’s controls would not, therefore, be enhanced by the corrective action put in place covering purchases of $50 or more. According to the implementation memo, this action is intended to address purchases such as government-owned equipment and travel, but it does not specifically state whether temporary labor would be covered.
It will be important for Energy to take further corrective actions that address enforcing its requirements for a detailed breakdown of other direct costs, as well as controls specifically designed for temporary labor, which are not covered by its new policy on purchases.

17. We disagree. Our report stated that Energy improperly assigned the costs of OWA services provided by Westwood to other program reporting units, in effect using other programs’ funds to pay for OWA activities. We found that these practices occurred throughout the 4 years of the program for both the Westwood and TDI contracts, not just for “a short period” as Energy stated in its written comments.

In addition to the contact named above, staff members who made key contributions to this report include Robert Owens, Assistant Director; Marie Ahearn; Sharon O. Byrd; Richard Cambosos; Donald Campbell; Lisa Crye; Tyshawn Davis; Timothy DiNapoli; Abe Dymond; Ryan Geach; Jason Kelly; Dina Landoll; Patrick McCray; and Ruth S. Walk.

The Energy Employees Occupational Illness Compensation Program Act of 2000 (EEOICPA) authorized the Department of Energy (Energy) to help its former contractor employees file state workers' compensation claims for illnesses that could be linked to exposure to toxic substances during their employment. Concerned with the relatively small number of finalized cases and the overall effectiveness of the program, Congress asked GAO to review costs incurred by Energy to administer the program. Specifically, Congress asked GAO to determine whether (1) internal controls over program payments were adequately designed to provide reasonable assurance that improper payments to contractors would not be made or would be detected in the normal course of business and (2) program payments were properly supported as a valid use of government funds. Energy did not establish an effective control environment over payments to contractors or overall contract costs.
Specifically, because Energy lacked an effective review and approval process for contractor invoices, it had no assurance that goods and services billed had actually been received. Although responsibility for review and approval of invoices on the largest contract rested with the Space and Naval Warfare Systems Center, New Orleans (SSC NOLA) through an interagency agreement, Energy did not ensure that SSC NOLA carried out proper oversight. Energy also failed to maintain accountability for equipment purchased by contractors. Further, subcontractor agreements, which represented nearly $15 million in program charges, were not adequately assessed, nor were overall contract costs sufficiently monitored or properly reported. These fundamental control weaknesses made Energy highly vulnerable to improper payments. GAO identified $26.4 million in improper and questionable payments for contractor costs, including billings of employees in labor categories for which they were not qualified or that did not reflect the duties they actually performed, the inappropriate use of fully burdened labor rates for subcontracted labor, add-on charges to other direct costs and base fees that were not in accordance with contract terms, and various other direct costs that were improperly paid. Further, certain payments toward the end of the program for furniture and computer equipment may not have been an efficient use of government funds. These improper and questionable payments represent nearly 30 percent of the $92 million in total program funds spent through September 30, 2005, but could be even higher given the poor control environment and the fact that GAO only reviewed selected program payments.
Medicaid funds most publicly supported long-term care services for persons with developmental disabilities. In 1995, Medicaid provided more than $13.2 billion to support over 275,000 individuals with these services. To be eligible for Medicaid, individuals must generally meet federal and state income and asset thresholds. To be considered developmentally disabled, individuals must also have a mental or physical impairment, with onset before age 22, that is likely to continue indefinitely, and they must be unable, without substantial assistance from others, to carry out some everyday activities, such as making basic decisions, communicating, taking transportation, keeping track of money, keeping out of danger, eating, and going to the bathroom. Until recently, states provided the bulk of services for this population through the Medicaid ICF/MR program. The ICF/MR program funds large institutions and smaller settings of 4 to 15 beds, and both sizes of settings are subject to the same regulatory standards. ICF/MR program services are available and provided as needed on a 24-hour basis. These services include medical and nursing services, physical and occupational therapy, psychological services, recreational and social services, and speech and audiology services. ICF/MR program services also include room and board. Providers of ICF/MR program services must adhere to an extensive set of regulations and are subject to annual on-site inspections as mandated by Medicaid. In 1981, the Congress enacted the 1915(c) waiver allowing states to apply to HCFA for a waiver of certain Medicaid rules to offer home and community-based services. By 1995, 49 states had 1915(c) home and community-based waiver programs for persons with developmental disabilities.
Waiver program services vary by state, but include primarily nonmedical services such as chore services, respite care, and habilitation services, which are all intended to help people live more independently and learn to take care of themselves. (See apps. II and III for a list of waiver program services and definitions in the three states we visited.) Unlike ICF/MR program services, waiver program services do not include room and board and are often provided on less than a 24-hour basis. HCFA carries out its waiver program oversight responsibilities through review of applications and renewals and monitoring of implementation through on-site compliance reviews. In approving waivers, HCFA reviews applications to ensure that (1) services are offered to individuals who, “but for the provision of such services . . . would require the level of care provided” in an institutional setting such as an ICF/MR; (2) total Medicaid per capita costs for waiver program recipients are not greater than total Medicaid per capita costs for persons receiving institutional care; and (3) states properly assure quality. The waiver program enables states to control utilization and costs in ways not permitted under the regular Medicaid program. The waiver program caps the number of persons served at HCFA-approved levels. It also allows states, with HCFA permission, to target services to distinct geographic areas or populations, such as persons with developmental disabilities or the elderly; offer a broader range of services; and serve persons with incomes somewhat higher than normal eligibility thresholds. In contrast, the regular Medicaid program generally requires that each state provide eligible beneficiaries with all federally mandated services and any optional services it chooses to offer. States, however, provide some community-based services to developmentally disabled individuals through the regular Medicaid program.
These services include federally mandated services, such as home health care, and other services that states may elect to provide, which are called optional services. Some of the more important optional services for the population with developmental disabilities are rehabilitative services, case management, and personal care. Because the regular Medicaid program operates as an entitlement—that is, all eligible individuals in a state are entitled to receive all services offered by the state—states have less control over utilization and the cost of services than in waiver programs. Through the use of waivers, states have changed long-term care nationally for persons with developmental disabilities in two ways. First, states have significantly expanded the number of individuals being served. Second, states have shifted the program balance from serving most people through the ICF/MR program to serving most through the waiver program. Generally, the shift to the waiver program has been part of an evolution of services away from large and more restrictive settings toward small and less restrictive settings, which are preferred by recipients and their families. Some state waiver programs are continuing to evolve from their earlier approach of providing services primarily in group home settings to one of serving people at home. From 1990 to 1995 the number of persons served by the waiver and ICF/MR programs combined rose at an average annual rate of 8 percent (see table 1). The number served by the waiver program more than tripled to over 142,000 persons during this period and accounted for the entire increase in the number of persons served by both programs. States dramatically increased the number of people who received waiver program services using a variety of strategies, including substituting waiver program services for ICF/MR program services and for services provided under state-only programs, and extending services to persons who were not being served before.
More people are now served through the waiver program than the ICF/MR program. Although the percentage of persons served through the waiver program varies by state, 30 states provide services to more people through the waiver program than the ICF/MR program (see fig. 1). With the support of recipients and their families, state officials have made changes to serve more people through the waiver program. All three groups have come to believe that the alternatives possible through the waiver can better serve persons with developmental disabilities. They believe that in many cases individuals can have a higher quality of life through greater community participation, including relationships with neighbors, activities in social organizations, attendance at public events, and shopping for food and other items. This can result in expanded social networks, enhanced family involvement, more living space and privacy, and improvements in communication, self-care, and other skills of daily living. States believed that they could use the waiver program to expand services while simultaneously reducing or limiting access to ICF/MR program care as a means to control growth in expenditures. As a result, many states have closed large institutions or held ICF/MR capacity steady even as the population in need has grown. Some states have also reduced smaller ICF/MR settings by converting them to waiver programs. The number of people in ICF/MR settings dropped 7 percent from 1990 to 1995. These actions have been part of an overall strategy to change the way services are provided and financed. States have used the flexibility of the waiver program to pursue distinct strategies and achieve different program results as shown in the three states we visited (see table 2). These states used the waiver program to substitute for ICFs/MR that were being closed, expand the number of persons being served, or both.
Rhode Island targeted waiver program services as a substitute for ICF/MR program care with little change in the number of persons served. The state began the 1990s with short waiting lists for services and a goal of closing all large institutions of 16 or more beds. Providing waiver program services to many of its former residents, the state closed the Ladd Center, its last large institution, in 1994 to become one of only two states along with the District of Columbia to close all its large institutions. Rhode Island also substantially reduced the number of recipients of services in smaller ICFs/MR by converting the ICFs/MR to the waiver program. As a result, a substantial number of persons who had been supported through the state’s ICF/MR program are now supported by its waiver program. The number of developmentally disabled persons served through the waiver and ICF/MR programs in Rhode Island, however, did not expand significantly. In contrast, Florida’s strategy for the waiver program was to expand services to a much broader population rather than using the waiver program to close ICF/MR settings. Florida began the 1990s with substantial waiting lists for services and fewer ICF/MR beds than most of the country relative to the size of the population with developmental disabilities. Florida chose to greatly expand the number of persons with developmental disabilities served to include people who had not been served or who needed more services. The overwhelming source of growth has been from the large increase in waiver program recipients, although Florida has also experienced modest growth in the number of ICF/MR recipients. The state’s increase in waiver program recipients includes persons who were receiving services from state-only programs and persons who were not previously served. 
Michigan used the waiver program in the 1990s to continue pursuing its goals of closing large institutions, offering placements for persons leaving small ICFs/MR, and expanding services to those with unmet needs. Michigan, like Florida, began the 1990s with many persons who needed but had not received services. Michigan, however, had more ICF/MR capacity than Florida. Most of Michigan’s ICF/MR capacity was in smaller settings, many of which had been developed to help the state close some of its large institutions. As a result, Michigan has closed all but about 400 beds in large institutions and significantly increased the number of persons served. State officials told us that by 1995, Michigan was serving more individuals in the waiver program than in its ICF/MR program. In the continuing evolution of services for persons with developmental disabilities, some states, such as Florida, Michigan, and Rhode Island, are changing the focus of waiver program services from group home care to more tailored services to meet individuals’ unique needs and preferences at home. These states and most others began their waiver programs by providing services primarily in group homes. Recently, state officials have come to believe that for many persons, services are best provided on a more individualized basis in a recipient’s home—his or her family’s home or own home or an adult foster care home—rather than in group home settings. The three states we visited became convinced that this was possible even for persons with severe disabilities, in part, because of their success in using this approach in the recently concluded Community Supported Living Arrangements (CSLA) program. Slightly more than one-half of all waiver program recipients nationally are estimated to have been living in settings other than group homes in 1995. In each of the three states we visited, many 1915(c) waiver recipients now live in their family’s home or their own home.
In Florida, more than one-half of all waiver recipients live in settings other than group homes, including nearly 50 percent who live in their family’s homes. The majority of Michigan’s waiver program recipients live in small settings other than licensed group homes. Just under one-half of Rhode Island’s recipients live in settings other than group homes. Each state expects the percentage of waiver program recipients living in nongroup home settings to increase. Officials in the states we visited and other experts told us that serving individuals with developmental disabilities who live in their own or their family’s home and receive less than 24-hour support often requires changes in the service delivery model. For example, these settings may need environmental changes and supports to make them suitable for persons with developmental disabilities. Such changes could include the installation of ramps for persons with physical disabilities or emergency communication technology and other equipment for persons with communication or cognitive impairments or a history of seizures who may need quick assistance. Paid assistance may also be needed to provide a variety of other services, such as supervision of or assistance in toileting, dressing, bathing, carrying out routine chores, managing money, or accessing public transportation and other community services. Assistance for such services is often provided on an individual basis rather than for several persons in a group home. Respite care may also be provided for family caregivers. Although the three states we visited have made major commitments to convert their waiver programs to individualized supports at home, these changes will require significant change on the part of everyone involved and could take years to fully implement. For example, some public agencies own or have long-term contracts for the use of group homes or have encouraged the development of private group homes. 
In addition, state officials told us that public agencies and other service providers may find it difficult to adapt to designing services for each individual living at home rather than offering services in the more familiar group home program setting. In addition, some family members and advocates have expressed concern that the level of funding available for and the range of services offered under the waiver program may not be sufficient for individuals who require constant supervision and care. Nationwide, Medicaid costs for long-term care services for persons with developmental disabilities rose at an average annual rate of 9 percent between 1990 and 1995 as states implemented their planned increases in the number of persons served. Costs rose from $8.5 billion in 1990 to $13.2 billion in 1995. (See table 3.) Most of the increase reflected increased costs for waiver program services, but increased ICF/MR program costs also were a factor. Waiver program costs grew primarily because more people were served, as per capita waiver costs increased slightly less than inflation. ICF/MR program cost increases resulted solely from growth in per capita ICF/MR program costs, which rose somewhat faster than inflation, as the number of residents declined. In 1995, per capita waiver program costs ($24,970) remained significantly lower than per capita ICF/MR spending ($71,992). In the three states we visited, average per capita costs and average increases in per capita costs varied according to each state’s waiver program strategy and other factors (see table 4). Florida per capita waiver costs, for example, were among the lowest in the nation, in part, as a result of the state’s strategy to expand services to more persons.
According to state officials, limited resources were stretched to cover as many people as possible by providing each individual with the level of services required to prevent institutionalization rather than providing all the services from which an individual might benefit. By contrast, from 1990 to 1994 Rhode Island’s per capita costs under the waiver and ICF/MR programs were much higher than the national average. The large increase in per capita waiver program costs resulted because, unlike Florida and Michigan, Rhode Island substituted waiver program services for persons receiving high-cost ICF/MR care and closed its last large institution. As a result, Rhode Island was serving a substantial number of persons through the waiver program who had previously received expensive ICF/MR care. At the same time, ICF/MR per capita costs were also higher, in part, because as the number of people in ICF/MR settings declined, the fixed costs were spread over a smaller population. In addition, the population that remained in ICF/MR settings was substantially disabled and required intensive services. Cost growth has been limited by two factors. First is a cap on the number of program recipients. Second, states have employed a variety of management practices to control per capita spending. Fundamental to waiver program cost control has been the federal Medicaid rule which, in effect, capped the number of recipients who could have been served each year. HCFA approves each state’s cap, and states are allowed to deny admission for services to otherwise qualified individuals when the cap is reached. By contrast, under the regular Medicaid program, all eligible recipients must be served and no limits exist on the number of recipients. As a result, waiver caps have given states a greater ability to control access and thereby cost growth than would have been possible if they had expanded services through the regular Medicaid program.
States have also used several management practices to help contain costs. In the three states we visited, these management practices include fixed agency budgets for waiver services and linking care plan management and the use of non-Medicaid services to individual budgets for each person served. States have developed fixed agency budgets within limits established under waiver rules. In Florida, Michigan, and Rhode Island, appropriations for waiver program and other services are in the budgets of developmental disability agencies. In Florida, budgets are allocated among 15 state district offices. In Michigan, budgets for serving persons with developmental disabilities are allocated among 52 local government community mental health boards and three state-operated agencies, each responsible for serving a local area. State or local agencies are responsible for approving individual service plans, authorizing budgets for the costs of these services, and monitoring program expenditures on an ongoing basis to ensure that total expenditures are within appropriated budgetary amounts as the three states transition their waiver programs to person-centered planning. Under the person-centered planning approach, the three states we visited require that case managers, or service providers in consultation with case managers, develop a plan of care linked to an individual budget for each person being served. This care plan and its costs must be approved by the state developmental disability agency, state district office, or community mental health board, depending upon the state. Upon agency approval, the case manager oversees the implementation of the care plan and monitors it on an ongoing basis. Significant variation from the plan requires agency approval and changes in service and budget authorizations.
This process provides more stability for the budget process and allows state agencies to monitor their overall spending on an ongoing basis and plan for contingencies to remain within budget levels. State developmental disability agencies in the three states we visited also require that case managers build into the care planning process and individual budget determination the use of non-Medicaid services, both paid and unpaid. State officials told us that this is part of better integrating persons with developmental disabilities into the community and of extending available waiver dollars to serve as many people as possible. When paid services are needed, states try to take advantage of services funded for broader populations, such as recreation or socialization in senior citizen centers or the use of public transportation. States also attempt to use unpaid services when possible by increasing assistance from families, friends, and volunteers. State officials told us that use of these paid and unpaid services reduces the need for Medicaid-financed supervision and care. A change in federal rules could result in higher waiver caps on enrollment and therefore higher costs. Until August 24, 1994, HCFA limited the number of waiver recipients in a state under the so-called cold bed rule. This rule required that each state document for HCFA approval that it either had an unoccupied Medicaid-certified institutional bed (or a bed that would be built or converted) for each individual waiver recipient the state requested to serve in its application. However, in 1994, HCFA eased waiver restrictions by eliminating the cold bed rule so that states were no longer required to demonstrate to HCFA that they had “cold beds.” HCFA took this action because it believed that the cold bed rule placed an unreasonable burden on states by requiring them to project estimates of additional institutional capacity.
HCFA now accepts a state’s assurance that absent the waiver the people served in the waiver program would receive appropriate Medicaid-funded institutional services. As HCFA recognized when it eliminated the cold bed rule, this change could result in higher waiver costs if states elect to increase the number of waiver recipients more rapidly than before. HCFA recognized, however, that state budget constraints could restrain waiver growth. State officials told us that elimination of the cold bed rule allows them to expand waiver services more rapidly than in the past, both to persons not currently receiving services and to others receiving services from state-only programs. State officials told us that converting state program recipients to the waiver was particularly advantageous given the federal Medicaid match. Officials in Florida and Michigan told us that they are planning to expand the number of people served in the waiver program more rapidly than they could have under the cold bed rule. This could increase costs more rapidly than in the past. Officials in Florida and Michigan said that they will phase in increases in the number of waiver recipients to stay within state budget constraints and to allow for a more orderly expansion of services to the larger numbers of new recipients. To increase quality for recipients and families, states are introducing promising quality assurance innovations while simultaneously building more flexibility into traditional quality assurance mechanisms. These changes are intended to provide recipients and families with a greater choice of services within appropriate budget and safety limits. However, until states more comprehensively develop and test these approaches, some recipients may face health and safety risks and others may not have access to the range of choices state programs seek to provide. One of the most important mechanisms that states use to assure adequate quality is service standards.
Each state, as required by HCFA guidelines, adopts or develops standards for each waiver service. Waiver standards are specified in state and local laws, regulations, or operating guidelines and are enforced by specific agencies. As a result, waiver standards reflect specific state processes and choices in how states assure quality, and are not uniform across the nation as are ICF/MR standards. (For example, see app. IV for a summary of how Florida meets HCFA requirements for specifying waiver standards.) Waiver standards may include professional licensing standards, minimum training requirements for staff, and criminal background checks for providers. The standards may also include requirements for certification of group home or other facilities and compliance with local building codes and fire and safety requirements. States review providers and services on an ongoing basis and have abuse and neglect reporting procedures in place. Florida, Michigan, and Rhode Island, for example, conduct routine and unannounced reviews of providers. As a result of these reviews, providers can be required to provide plans of correction for identified problems and implement improvements. In some cases, providers have lost their certification to participate in the program. These states also have formal grievance procedures and a grievance unit, such as a state agency or human rights committee, to investigate complaints on a statewide, regional, or agency basis. Through these processes, the states have also identified problems in quality and taken steps to ensure corrective action. In addition to state quality assurance efforts, HCFA regional staff conduct a compliance review of each state’s waiver program before its renewal. HCFA uses a compliance review document for this process. HCFA reviews involve random selections of recipients for interviews and visits to their homes. The reviews also involve interviews with and visits to service providers and advocates. 
If HCFA determines that quality is not satisfactory, it can require that a state take corrective action before a waiver can be renewed. States are taking steps to develop or enhance existing mechanisms to promote better quality in waiver program services. Many of these mechanisms were used in the recently concluded CSLA program to provide individualized services to people at home and are now being incorporated into the home and community-based waiver program even for persons with substantial disabilities. Advocates, family members, and recipients have been generally positive about this shift to support individuals in more integrated community settings. Person-centered planning is a key element of providing better quality in waiver services, according to officials in the three states we visited and national experts. The planning process and the resulting plans are individualized to incorporate substantial recipient and family input on how the individual will live and what assistance the individual will need. The case manager, called support coordinator in some states, has primary responsibility in person-centered planning, which includes working with the recipient to develop the plan, arranging for needed services, monitoring service delivery and quality, and revising the plan as necessary. A budget for the individual is established to provide the services identified as appropriate and cost-effective. Recipients and case managers choose providers on the basis of their satisfaction with services. State officials told us that this approach not only gives recipients more say in how they are served but that the resulting competition motivates providers to increase service quality. Linking persons living in the community with volunteers who can provide assistance and serve as advocates is seen as another important mechanism for promoting quality. 
For example, some states, including the three we visited, have a circle of friends or similar process for individual recipients. A circle of friends is a group of volunteers, which can include family, friends, community members, and others, who meet regularly to help persons with disabilities reach their goals. These volunteers help plan how to obtain needed supports; help persons participate in community, work, or leisure activities they choose; and try to help find solutions to problems. When recipients are integrated in the community, they have more choice and can get better quality services, according to national experts and state officials we interviewed. This community integration increases the number of persons who can observe and identify problems in service quality and notify appropriate officials when there are deficiencies. Because program quality depends on the active participation of recipients, families, and service providers, states are also providing substantial training to these groups to encourage and strengthen their participation. Training can include informing recipients and families of available service providers, procedures for providing feedback about services, and steps to take if quality is not improved. Training for service providers may focus on reinforcing the fact that the recipient and family have the right to make choices about services and that staff must be responsive to those choices unless they are inappropriate for safety concerns or for other compelling reasons, such as available financial resources. States are also modifying how they monitor quality. Traditionally, they emphasized compliance with certain criteria, such as maintaining a minimum level of staff resources and implementing standard care processes. Some states are focusing their quality monitoring more on outcome measures for each individual while still assessing providers' compliance with program standards.
For example, states, including the three we visited, are trying to determine whether the recipients are living where and with whom they chose, whether they are safe in this environment, and whether they are satisfied with their environment and the services they receive. States are also attempting to make their oversight less intrusive for the recipients. For example, some states use trained volunteers to interview recipients at their homes on a periodic basis to check the quality of services received. In other instances, although case managers are required to meet recipients on a regular basis, meetings can be arranged at the recipient's convenience, including in the evening or on weekends, and at a place the recipient prefers, such as his or her home or a local park or library. Case managers talk with the recipients and their families about the quality of the services they receive and take any actions necessary to correct deficiencies. While officials in the three states we visited and other experts agree that many persons prefer services provided at home to services provided in institutions or other group settings, they also note that providing services at home presents unique problems in ensuring quality. Because the new focus is on providing individual choice, the types of services that are offered and the means for providing these services can vary greatly. To promote quality and ensure that minimum standards are met requires a broad range of approaches. Although states continue to develop quality assurance mechanisms, state officials acknowledge that these are not yet comprehensive enough to assure recipient satisfaction and safety. In the three states we visited, state officials and provider agencies told us that they are still developing guidance and oversight in a number of key areas. Michigan, for example, is revising its case management standards and statewide quality assurance approaches.
Rhode Island is developing a more systematic monitoring approach statewide, and Florida is continuing to implement and evaluate its independent service coordinator approach. One of the greatest difficulties in developing quality mechanisms for services in alternative settings is balancing individual choice and risks. Where greater choice is encouraged and risks are higher, more frequent monitoring and contingency planning need to be built into the process. Yet some professional staff and agency providers in the states we visited believe that they do not have sufficient guidance on where to draw the line between their assessment of what is appropriate for the disabled person and the individual's choice. For example, some persons with mental retardation cannot speak clearly enough to be understood by people who do not know them; cannot manage household chores, such as cooking in a safe manner; or have no family member to perform overall supervision to keep them from danger. Yet these people express a desire to live independently, without 24-hour staff supervision. Florida, Michigan, and Rhode Island each attempt to customize supports to reduce risks for individuals who live in these situations. They may arrange for roommates, encourage frequent visits and telephone contact by neighbors and friends, enroll individuals in supervised day activities, install in-home electronic access to emergency help, and provide paid meal preparation and chore services. As this new process evolves, states and providers seek to develop a better understanding of how to manage risks and reduce them where possible. This should lead to improved guidance for balancing risks and choices for each recipient's unique circumstances. Determining what recipients' choices are can be difficult for a number of reasons. First, many of these individuals have had little experience in making decisions and may also have difficulty in communicating.
In addition, some recipients have complained that they are not being provided the range of choices to which they should have access and that quality monitoring is too frequent or intrusive despite the changes states have introduced. However, concern has been expressed that quality assurance is not rigorous enough to reduce all health or safety risks and that the range of choices is too great for some individuals. State officials and other experts we interviewed have emphasized the need for vigilance to protect recipients and ensure their rights. They have been especially concerned with assuring quality for recipients who are unable to communicate well and for those who do not have family members to assist them. The states we visited are taking special precautions to try to assure quality in these cases—such as recruiting volunteers to assist and asking recipient groups to suggest how to assure quality for this vulnerable population. However, state officials and HCFA agree that more development of quality assurance approaches is needed. Officials from the Office of Long-Term Care Services in HCFA’s Medicaid Bureau and from Florida, Michigan, and Rhode Island reviewed a draft of this report. They generally agreed with its contents and provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services; the Administrator, Health Care Financing Administration; and other interested parties. Copies of this report will also be made available to others upon request. If you or your staff have any questions, please call me at (202) 512-7119; Bruce D. Layton, Assistant Director, at (202) 512-6837; or James C. Musselwhite, Senior Social Science Analyst, at (202) 512-7259. Other major contributors to this report include Carla Brown, Eric Anderson, and Martha Grove Hipskind. We focused our work on Medicaid 1915(c) waivers for adults with developmental disabilities. 
We also examined related aspects of institutional care provided through ICF/MR, state plan optional services, and the CSLA program, all under Medicaid. To address our study objectives we (1) conducted a literature review, (2) interviewed national experts on mental retardation and other developmental disabilities, (3) collected national data on expenditures and the number of individuals served, and (4) collected and analyzed data from three states. National experts interviewed included officials at HCFA; the Office of the Assistant Secretary for Planning and Evaluation (ASPE) in the Department of Health and Human Services; the Administration on Developmental Disabilities; the President’s Committee on Mental Retardation; the National Association of Developmental Disabilities Councils; the Administration on Aging; the National Association of State Directors of Developmental Disabilities Services, Inc. (NASDDDS); and the ARC, formerly known as the Association for Retarded Citizens. We also interviewed researchers at University Affiliated Programs (UAP) on developmental disabilities at the Universities of Illinois and Minnesota and Wayne State University. We conducted our case studies in Florida, Michigan, and Rhode Island. We chose these states for several reasons. The three states provide a range of state size and geographic representation. Each state has a substantial developmental disability waiver program that serves more people than its ICF/MR program. Experts told us that these states would provide examples of different state strategies for utilizing the Medicaid waiver. This included their policies regarding large and small institutions as well as the design and implementation of their waiver programs. The three states also have important differences in the administrative structure of their developmental disability programs. 
Rhode Island administers its waiver program statewide through the Division of Developmental Disabilities in the Department of Mental Health, Retardation and Hospitals. Florida places statewide administration and oversight responsibility for its waiver program in Developmental Services within the Department of Health and Rehabilitative Services, but operational responsibility rests with its 15 district offices of Developmental Services. Michigan places statewide administration and oversight responsibility for its waiver programs in the state Department of Mental Health, but operating responsibilities rest with 52 Community Mental Health Boards (CMHB), which are local government entities covering one or more counties, and with three state-operated agencies, each responsible for serving a local area. Florida district offices and Michigan CMHBs have discretion in the design and implementation of waiver program and other services within the broad outlines of state policy. We visited each state to conduct interviews with state and local officials, researchers, service providers, advocates, families, and recipients. These interviews included state Medicaid officials and developmental services officials and officials in agencies on aging and developmental disability councils. In Florida, we also visited state district offices in Pensacola and Tallahassee to conduct interviews with district government and nongovernment representatives. In Michigan, we visited the Detroit-Wayne and Midland/Gladwin CMHBs to conduct interviews with government and nongovernment representatives. We followed up with state agencies to collect additional information. The national waiver and ICF/MR program expenditure and recipient data used in this report are from the UAP on developmental disabilities at the Research and Training Center on Community Living, Institute on Community Integration, at the University of Minnesota.
The Institute collects these data, with the exception of ICF/MR expenditures, directly from state agencies. The Institute uses ICF/MR expenditure data, compiled by the Medstat Group under contract to HCFA. National data from the Institute were available through 1995. The expenditure and recipient data we report for Florida, Michigan, and Rhode Island were provided to us by the state agencies responsible for developmental services and the Medicaid agencies. The latest complete data available from these three states were for 1994. We therefore used 1994 national data for comparison purposes. Some differences occur in the recipient counts among the national data we used from the Institute and data we collected from agencies in Florida, Michigan, and Rhode Island. These differences could affect some aspects of our comparisons of national trends and trends in the three states. Institute data on recipients show the total number of persons receiving services on a given date—June 30 of each year—whereas data for the three states show the cumulative number of persons receiving services over a 12-month period. Therefore, data supplied by the states could result in a larger count of program recipients than the methodology used by the Institute. This could have the impact of making per capita expenditure calculations smaller for the state data than for the national data. Our comparisons of data from the two sources, however, showed few substantial differences in the data for the three states. We excluded children from our analysis because (1) their needs are different in many respects from those of adults, (2) family responsibilities for the care of children are more comprehensive than for adults, and (3) the educational system has the lead public responsibility for services for children. Recipient and expenditure data in this report, however, include some children because it was not possible to systematically exclude them. 
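The effect of the two counting methods on per capita figures can be sketched with a brief illustration (the dollar amounts and recipient counts below are hypothetical, not drawn from this report's data): because a cumulative 12-month count includes everyone who entered or left the program during the year, it is at least as large as a point-in-time count, so dividing the same annual expenditures by it yields a per capita figure that is at least as small.

```python
# Hypothetical illustration of how recipient-count methodology affects
# per capita expenditure figures; all numbers are invented for this sketch.

annual_expenditures = 50_000_000  # total waiver spending for one year

# Institute method: persons receiving services on a given date (June 30).
point_in_time_count = 2_000
# State method: persons receiving services at any point over 12 months;
# this count is always at least as large as the point-in-time count.
cumulative_count = 2_500

per_capita_point_in_time = annual_expenditures / point_in_time_count
per_capita_cumulative = annual_expenditures / cumulative_count

print(per_capita_point_in_time)  # 25000.0
print(per_capita_cumulative)     # 20000.0
```

This arithmetic is why data supplied by the states, which use cumulative counts, could produce smaller per capita expenditure figures than the Institute's point-in-time methodology.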
However, the percentage of children in these services is small. In 1992, for example, about 11 percent of ICF/MR service recipients were less than 21 years old. We conducted our review from May 1995 through May 1996 in accordance with generally accepted government auditing standards. States, with HCFA's approval, choose which services they offer through waiver programs and how the services are defined. States can choose from a list of standard services and definitions in the HCFA waiver application or design their own services. In designing their own services, states can add new services or redefine standard services. States can also extend optional services to offer more units of these services to waiver program recipients than are available to other recipients under the regular Medicaid program. The three states we visited chose to offer a number of standard services under their waiver program. Each state also modified the definition of some standard services that it provides or offered services not on the standard waiver list. (See fig. II.1.) For example, Florida modified the definition of case management to include helping individuals and families identify preferences for services. Florida also added several nonstandard, state-defined services such as behavior analysis and assessments and supported living coaching. Rhode Island's modified definition of homemaker services includes a bundle of services often offered separately, including standard homemaker services, personal care services, and licensed practical nursing services. Rhode Island also added nonstandard services to provide minor assistive devices and support of family living arrangements. Michigan modified the standard definition of environmental accessibility adaptations to include not only physical adaptations to the home, but to the work environment as well.
Michigan also recently added a new state-defined service, community living supports, which is a consolidation of four services—in-home habilitation, enhanced personal care, personal assistance, and transportation—previously provided separately. Florida and Michigan also chose to offer several optional services in their waiver programs. Rhode Island's definition of homemaker includes not only homemaker services as typically defined, but personal care and licensed practical nursing services as well. The HCFA definition for each standard waiver service offered in Florida, Michigan, and Rhode Island is shown in appendix III. This appendix shows HCFA's definition for each standard waiver service offered in Florida, Michigan, and Rhode Island. These service names and definitions are written as they appear in the latest version of the HCFA 1915(c) waiver application format, dated June 1995. Because states have the flexibility to modify these definitions, the definitions and how services are implemented vary among the states. Adult Companion Services: Non-medical care, supervision and socialization, provided to a functionally impaired adult. Companions may assist or supervise the individual with such tasks as meal preparation, laundry and shopping, but do not perform these activities as discrete services. The provision of companion services does not entail hands-on nursing care. Providers may also perform light housekeeping tasks which are incidental to the care and supervision of the individual. This service is provided in accordance with a therapeutic goal in the plan of care, and is not purely diversional in nature. Case Management: Services which will assist individuals who receive waiver services in gaining access to needed waiver and other State plan services, as well as needed medical, social, educational and other services, regardless of the funding source for the services to which access is gained.
Chore Services: Services needed to maintain the home in a clean, sanitary and safe environment. This service includes heavy household chores such as washing floors, windows and walls, tacking down loose rugs and tiles, moving heavy items of furniture in order to provide safe access and egress. These services will be provided only in cases where neither the individual, nor anyone else in the household, is capable of performing or financially providing for them, and where no other relative, caregiver, landlord, community/volunteer agency, or third party payor is capable of or responsible for their provision. In the case of rental property, the responsibility of the landlord, pursuant to the lease agreement, will be examined prior to any authorization of service. Environmental accessibility adaptations: Those physical adaptations to the home, required by the individual’s plan of care, which are necessary to ensure the health, welfare and safety of the individual, or which enable the individual to function with greater independence in the home, and without which, the individual would require institutionalization. Such adaptations may include the installation of ramps and grab-bars, widening of doorways, modification of bathroom facilities, or installation of specialized electric and plumbing systems which are necessary to accommodate the medical equipment and supplies which are necessary for the welfare of the individual. Excluded are those adaptations or improvements to the home which are of general utility, and are not of direct medical or remedial benefit to the individual, such as carpeting, roof repair, central air conditioning, etc. Adaptations which add to the total square footage of the home are excluded from this benefit. All services shall be provided in accordance with applicable State or local building codes. Family Training: Training and counseling services for the families of individuals served on this waiver. 
For purposes of this service, "family" is defined as the persons who live with or provide care to a person served on the waiver, and may include a parent, spouse, children, relatives, foster family, or in-laws. "Family" does not include individuals who are employed to care for the consumer. Training includes instruction about treatment regimens and use of equipment specified in the plan of care, and shall include updates as necessary to safely maintain the individual at home. All family training must be included in the individual’s written plan of care. Habilitation: Services designed to assist individuals in acquiring, retaining and improving the self-help, socialization and adaptive skills necessary to reside successfully in home and community-based settings.This service includes: retention, or improvement in skills related to activities of daily living, such as personal grooming and cleanliness, bed making and household chores, eating and the preparation of food, and the social and adaptive skills necessary to enable the individual to reside in a non-institutional setting. Payments for residential habilitation are not made for room and board, the cost of facility maintenance, upkeep and improvement, other than such costs for modifications or adaptations to a facility required to assure the health and safety of residents, or to meet the requirements of the applicable life safety code. Payment for residential habilitation does not include payments made, directly or indirectly, to members of the individual’s immediate family. Payments will not be made for the routine care and supervision which would be expected to be provided by a family or group home provider, or for activities or supervision for which a payment is made by a source other than Medicaid. 
-- Day habilitation: Assistance with acquisition, retention, or improvement in self-help, socialization and adaptive skills which takes place in a non-residential setting, separate from the home or facility in which the individual resides. Services shall normally be furnished 4 or more hours per day on a regularly scheduled basis, for 1 or more days per week unless provided as an adjunct to other day activities included in an individual's plan of care. Day habilitation services shall focus on enabling the individual to attain or maintain his or her maximum functional level and shall be coordinated with any physical, occupational, or speech therapies listed in the plan of care. In addition, they may serve to reinforce skills or lessons taught in school, therapy, or other settings. -- Prevocational services not available under a program funded under section 110 of the Rehabilitation Act of 1973 or section 602(16) and (17) of the Individuals with Disabilities Education Act (20 U.S.C. 1401 (16 and 17)). Services are aimed at preparing an individual for paid or unpaid employment, but are not job-task oriented. Services include teaching such concepts as compliance, attendance, task completion, problem solving and safety. Prevocational services are provided to persons not expected to be able to join the general work force or participate in a transitional sheltered workshop within one year (excluding supported employment programs). Prevocational services are available only to individuals who have previously been discharged from a SNF, ICF [intermediate care facility], NF or ICF/MR [intermediate care facility for mental retardation]. Activities included in this service are not primarily directed at teaching specific job skills, but at underlying habilitative goals, such as attention span and motor skills. All prevocational services will be reflected in the individual's plan of care as directed to habilitative, rather than explicit employment objectives.
-- Educational services, which consist of special education and related services as defined in sections (15) and (17) of the Individuals with Disabilities Education Act, to the extent to which they are not available under a program funded by IDEA. -- Supported employment services, which consist of paid employment for persons for whom competitive employment at or above the minimum wage is unlikely, and who, because of their disabilities, need intensive ongoing support to perform in a work setting. Supported employment is conducted in a variety of settings, particularly work sites in which persons without disabilities are employed. Supported employment includes activities needed to sustain paid work by individuals receiving waiver services, including supervision and training. When supported employment services are provided at a work site in which persons without disabilities are employed, payment will be made only for the adaptations, supervision and training required by individuals receiving waiver services as a result of their disabilities, and will not include payment for the supervisory activities rendered as a normal part of the business setting. Supported employment services furnished under the waiver are not available under a program funded by either the Rehabilitation Act of 1973 or P.L. 94-142. Homemaker: Services consisting of general household activities (meal preparation and routine household care) provided by a trained homemaker, when the individual regularly responsible for these activities is temporarily absent or unable to manage the home and care for him or herself or others in the home. Homemakers shall meet such standards of education and training as are established by the State for the provision of these activities. Personal care services: Assistance with eating, bathing, dressing, personal hygiene, activities of daily living. This service may include assistance with preparation of meals, but does not include the cost of the meals themselves.
When specified in the plan of care, this service may also include such housekeeping chores as bedmaking, dusting, and vacuuming, which are incidental to the care furnished, or which are essential to the health and welfare of the individual, rather than the individual's family. Personal care providers must meet State standards for this service. Personal Emergency Response Systems (PERS): PERS is an electronic device which enables certain individuals at high risk of institutionalization to secure help in an emergency. The individual may also wear a portable "help" button to allow for mobility. The system is connected to the person's phone and programmed to signal a response center once a "help" button is activated. The response center is staffed by trained professionals. PERS services are limited to those individuals who live alone, or who are alone for significant parts of the day, and have no regular caregiver for extended periods of time, and who would otherwise require extensive routine supervision. Private duty nursing: Individual and continuous care (in contrast to part time or intermittent care) provided by licensed nurses within the scope of State law. These services are provided to an individual at home. Respite care: Services provided to individuals unable to care for themselves; furnished on a short-term basis because of the absence or need for relief of those persons normally providing the care. Skilled nursing: Services listed in the plan of care which are within the scope of the State's Nurse Practice Act and are provided by a registered professional nurse, or licensed practical or vocational nurse under the supervision of a registered nurse, licensed to practice in the State.
Specialized Medical Equipment and Supplies: Specialized medical equipment and supplies include devices, controls, or appliances, specified in the plan of care, which enable individuals to increase their abilities to perform activities of daily living, or to perceive, control, or communicate with the environment in which they live. This service also includes items necessary for life support, ancillary supplies and equipment necessary to the proper functioning of such items, and durable and non-durable medical equipment not available under the Medicaid State plan. Items reimbursed with waiver funds shall be in addition to any medical equipment and supplies furnished under the State plan and shall exclude those items which are not of direct medical or remedial benefit to the individual. All items shall meet applicable standards of manufacture, design, and installation. Transportation: Service offered in order to enable individuals served on the waiver to gain access to waiver and other community services, activities and resources, specified by the plan of care. This service is offered in addition to medical transportation required under 42 CFR 431.53 and transportation services under the State plan, defined at 42 CFR 440.170(a) (if applicable), and shall not replace them. Transportation services under the waiver shall be offered in accordance with the individual's plan of care. Whenever possible, family, neighbors, friends, or community agencies which can provide this service without charge will be utilized. HCFA requires that each state specify licensure, certification, or other standards for each service in its waiver application. These requirements are detailed in state and local laws, regulations, or operating guidelines and enforced by state and local agencies.
Such requirements may include professional standards for individuals providing services, minimum training requirements, criminal background checks, certification for facilities, local building codes, and fire and health requirements. For example, the information below shows how Florida addresses HCFA requirements for licensure, certification, and other standards for each of its waiver program services. The information, unless otherwise noted, was obtained from Florida's Department of Health and Rehabilitative Services' July 1995 Services Directory, which provides the details of service standards in Florida's approved waiver. Psychologists, clinical social workers, marriage and family therapists, mental health counselors, or providers certified by the Department of Health and Rehabilitative Services (HRS) Developmental Services (DS) Behavior Analysis Certification program. Psychologists shall be licensed by the Department of Business and Professional Regulation in accordance with Chapter 490, Florida Statutes (F.S.). Clinical social workers, marriage and family therapists, and mental health counselors shall be licensed in accordance with Chapter 491, F.S. Others must be certified under the HRS Behavior Analysis Certification program. Background screening is required for those certified under the HRS Developmental Services Behavior Analysis Certification program. Home health agencies, hospice agencies, and independent vendors. Home health and hospice agencies must be licensed by the Agency for Health Care Administration in accordance with Chapter 400, Part IV or Part VI, F.S. Independent vendors are not required to be licensed or registered. Independent vendors must have at least 1 year of experience working in a medical, psychiatric, nursing, or child care setting or working with developmentally disabled persons. College or vocational/technical training equal to 30 semester hours, 45 quarter hours, or 720 classroom hours can substitute for the required experience.
Background screening required of independent vendors. Home health agencies, hospice agencies, and independent vendors. Home health and hospice agencies shall be licensed by the Agency for Health Care Administration in accordance with Chapter 400, Part IV or Part VI, F.S. Independent vendors shall be registered with the Agency for Health Care Administration as companions or sitters in accordance with Section 400.509, F.S. Background screening required for independent vendors. Centers or sites designated by the district DS office as adult day training centers. Licensure/registration is not required. Background screening required for all direct care staff. Contractors, electricians, plumbers, carpenters, handymen, medical supply companies, and other vendors. Contractors, plumbers, and electricians shall be licensed by the Department of Business and Professional Regulation in accordance with Chapter 489, F.S. Medical supply companies, carpenters, handymen, and other vendors shall hold local occupational licenses or permits in accordance with Chapter 205, F.S. None. Home health agencies, hospice agencies, and independent vendors. Home health and hospice agencies shall be licensed by the Agency for Health Care Administration in accordance with Chapter 400, Part IV or Part VI, F.S. Independent vendors must be registered as homemakers with the Agency for Health Care Administration in accordance with Section 400.509, F.S. Background screening required for independent vendors. Independent vendors and agencies. Licensure/registration is not required. Independent vendors must have at least 1 year of experience working in a medical, psychiatric, nursing, or child care setting or working with developmentally disabled persons. College or vocational/technical training that equals at least 30 semester hours, 45 quarter hours, or 720 classroom hours may substitute for the required experience. Agency employees providing this service must meet the same requirements. 
Background screening required of agency employees who perform this service and of independent vendors. Occupational therapists, occupational therapy aides, and occupational therapy assistants. Occupational therapists, aides, and assistants may provide this service as independent vendors or as employees of licensed home health or hospice agencies. Occupational therapists, occupational therapy aides, and occupational therapy assistants shall be licensed by the Department of Business and Professional Regulation in accordance with Chapter 468, Part III, F.S., and may perform services only within the scope of their licenses. Home health and hospice agencies shall be licensed by the Agency for Health Care Administration in accordance with Chapter 400, Part IV or Part VI, F.S. None. Home health and hospice agencies and independent vendors. Home health and hospice agencies shall be licensed by the Agency for Health Care Administration in accordance with Chapter 400, Part IV or Part VI, F.S. Independent vendors are not required to be licensed or registered. Independent vendors shall have at least 1 year of experience working in a medical, psychiatric, nursing, or child care setting or working with developmentally disabled persons. College or vocational/technical training that equals at least 30 semester hours, 45 quarter hours, or 720 classroom hours may substitute for the required experience. Background screening is required of independent vendors. Electrical contractors and alarm system contractors. Electrical contractors and alarm system contractors must be licensed by the Department of Business and Professional Regulation in accordance with Chapter 489, Part II, F.S. None. Physical therapists and physical therapist assistants. Physical therapists and assistants may provide this service as independent vendors or as employees of licensed home health or hospice agencies. 
Physical therapists and therapist assistants shall be licensed by the Department of Business and Professional Regulation in accordance with Chapter 486, F.S., and may perform services only within the scope of their licenses. Home health and hospice agencies shall be licensed by the Agency for Health Care Administration in accordance with Chapter 400, Part IV or Part VI, F.S. None. Registered nurses and licensed practical nurses. Nurses may provide this service as independent vendors or as employees of licensed home health or hospice agencies. Nurses shall be registered or licensed by the Department of Business and Professional Regulation in accordance with Chapter 464, F.S. Home health or hospice agencies shall be licensed by the Agency for Health Care Administration in accordance with Chapter 400, Part IV or Part VI, F.S. None. Psychologists. Psychologists shall be licensed by the Department of Business and Professional Regulation in accordance with Chapter 490, F.S. None. Group homes, foster homes, adult congregate living facilities, and independent vendors. Group and foster home facilities shall be licensed by the Department of Health and Rehabilitative Services in accordance with Chapter 393, F.S. Adult congregate living facilities shall be licensed by the Agency for Health Care Administration in accordance with Chapter 400, Part III, F.S. Licensure or registration is not required for independent vendors. Independent vendors must possess at least an associate’s degree from an accredited college with a major in nursing; education; or a social, behavioral, or rehabilitative science. Experience in one of these fields shall substitute on a year-for-year basis for required education. Background screening required of direct care staff employed by licensed residential facilities and independent vendors. 
Group homes; foster homes; adult congregate living facilities; home health agencies; hospice agencies; other agencies that specialize in serving persons who have a developmental disability; and independent vendors, registered nurses, and licensed practical nurses. Group and foster homes shall be licensed by the Department of Health and Rehabilitative Services in accordance with Chapter 393, F.S. Adult congregate living facilities shall be licensed by the Agency for Health Care Administration in accordance with Chapter 400, Part III, F.S. Home health and hospice agencies shall be licensed by the Agency for Health Care Administration in accordance with Chapter 400, Part IV or Part VI, F.S. Nurses who render the service as independent vendors shall be licensed or registered by the Department of Business and Professional Regulation in accordance with Chapter 464, F.S. Licensure or registration is not required for independent vendors who are not nurses. Background screening is required of direct care staff employed by licensed residential facilities and other agencies that serve persons who have a developmental disability and of independent vendors who are not registered nurses or licensed practical nurses. Independent vendors who are not nurses must have at least 1 year of experience working in a medical, psychiatric, nursing, or child care setting or working with developmentally disabled persons. College or vocational/technical training that equals at least 30 semester hours, 45 quarter hours, or 720 classroom hours may substitute for the required experience. Registered nurses and licensed practical nurses. Nurses may provide this service as independent vendors or as employees of licensed home health or hospice agencies. Nurses shall be registered or licensed by the Department of Business and Professional Regulation in accordance with Chapter 464, F.S. 
Home health and hospice agencies shall be licensed by the Agency for Health Care Administration in accordance with Chapter 400, Part IV or Part VI, F.S. None. Group homes that employ registered nurses, licensed practical nurses, or licensed nurse aides. Group homes shall be licensed by the Department of Health and Rehabilitative Services in accordance with Chapter 393, F.S. Nurses shall be registered or licensed by the Department of Business and Professional Regulation in accordance with Chapter 464, F.S., and may perform services only within the scope of their license or registration. Background screening required of direct care staff employed by licensed group homes. (See Florida’s approved waiver renewal application for 1993-98.) Medical supply companies, licensed pharmacies, and independent vendors. Pharmacies must be licensed by the Department of Business and Professional Regulation in accordance with Chapter 465, F.S. Medical supply companies and independent vendors must be licensed under Chapter 205, F.S. None. Speech-language pathologists and speech-language pathology assistants. Speech-language pathologists or assistants may provide this service as independent vendors or as employees of licensed home health or hospice agencies. Speech-language pathologists and pathology assistants shall be licensed by the Department of Business and Professional Regulation in accordance with Chapter 468, Part I, F.S. Home health and hospice agencies shall be licensed by the Agency for Health Care Administration in accordance with Chapter 400, Part IV or Part VI, F.S. None. Single practitioner vendors or agency vendors. Licensure is not required. Single practitioners and support coordinators employed by agencies shall have a bachelor’s degree from an accredited college or university and 2 years of professional experience in mental health, counseling, social work, guidance, or health and rehabilitative programs. 
A master’s degree shall substitute for 1 year of the required experience. Providers (single practitioners and agency directors/managers) are required to complete statewide training conducted by the Developmental Services Program Office, as well as district-specific training conducted by the district DS office. Support coordinators employed by agencies are also required to be trained on the same topics covered in the statewide and district-specific training; however, this training may be conducted by the support coordination agency if approved by the district and the agency trainer meets specific requirements described in Chapter 10F-13, Florida Administrative Code. Independent vendors and agency vendors. Licensure is not required. Independent vendors and employees of agencies who render this service shall have a bachelor’s degree from an accredited college or university with a major in nursing; education; or a social, behavioral, or rehabilitative science or shall have an associate’s degree from an accredited college or university with a major in nursing; education; or a social, behavioral, or rehabilitative science and 2 years of experience. Experience in one of these fields shall substitute on a year-for-year basis for the required college education. Agency employees are required to attend at least 12 hours of preservice training and independent vendors must attend at least one supported living-related conference or workshop before certification. All providers and employees are also required to attend human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) training. Background screening is required. Independent vendors and commercial transportation agencies. Providers shall hold applicable licenses issued by the Department of Highway Safety and Motor Vehicles and shall secure appropriate insurance. Proof of license and insurance shall be provided to the district DS office. Background screening required for independent vendors. 
Medicaid Long-Term Care: State Use of Assessment Instruments in Care Planning (GAO/PEMD-96-4, Apr. 2, 1996). Long-Term Care: Current Issues and Future Directions (GAO/HEHS-95-109, Apr. 13, 1995). Medicaid: Spending Pressures Drive States Toward Program Reinvention (GAO/HEHS-95-122, Apr. 4, 1995). Long-Term Care: Diverse, Growing Population Includes Millions of Americans of All Ages (GAO/HEHS-95-26, Nov. 7, 1994). Long-Term Care Reform: States’ Views on Key Elements of Well-Designed Programs for the Elderly (GAO/HEHS-94-227, Sept. 6, 1994). Long-Term Care: Other Countries Tighten Budgets While Seeking Better Access (GAO/HEHS-94-154, Aug. 30, 1994). Financial Management: Oversight of Small Facilities for the Mentally Retarded and Developmentally Disabled (GAO/AIMD-94-152, Aug. 12, 1994). Medicaid Long-Term Care: Successful State Efforts to Expand Home Services While Limiting Costs (GAO/HEHS-94-167, Aug. 11, 1994). Long-Term Care: Status of Quality Assurance and Measurement in Home and Community Based Services (GAO/PEMD-94-19, Mar. 31, 1994). Long-Term Care: Support for Elder Care Could Benefit the Government Workplace and the Elderly (GAO/HEHS-94-64, Mar. 4, 1994). Long-Term Care: Private Sector Elder Care Could Yield Multiple Benefits (GAO/HEHS-94-60, Jan. 31, 1994). Health Care Reform: Supplemental and Long-Term Care Insurance (GAO/T-HRD-94-58, Nov. 9, 1993). Long-Term Care Reform: Rethinking Service Delivery, Accountability, and Cost Control (GAO/HRD-93-1-SP, July 13, 1993). 
Pursuant to a congressional request, GAO reviewed states' experiences in utilizing the Medicaid waiver program to provide care for developmentally disabled adults in alternative settings, focusing on: (1) expanding state use of the waiver program; (2) controlling long-term care costs for developmentally disabled individuals; and (3) the strengths and limitations in states' quality assurance approaches in community settings. GAO found that: (1) based on national data and three case studies, states' use of the waiver program has changed long-term care for developmentally disabled persons by providing such persons with a broader range of services that they and their families prefer; (2) the waiver program has increased the number of persons served and the use of group home settings while allowing states to close many institutional care facilities and to expand services to persons in state-financed programs; (3) states now serve more developmentally disabled persons through the waiver program than the institutional program; (4) the waiver program has allowed states to pursue distinct strategies and achieve different program results; (5) from 1990 to 1995, Medicaid costs for long-term care for developmentally disabled persons increased an average of 9 percent annually due to increased costs for waiver and institutional program services, but per capita costs and cost increases varied by state; (6) the cap on the number of program recipients and state management practices helped contain these costs; (7) changes in 
the Health Care Financing Administration's (HCFA) process for setting waiver program caps could increase program costs, but HCFA believes that state budget constraints could limit program growth; and (8) although states are changing their quality assurance procedures for waiver program services, such as customizing quality assurance to individual circumstances, more needs to be done to improve quality oversight mechanisms and reduce participants' risk as these mechanisms evolve.
Assistance Available under Major Disaster Declarations

A major disaster declaration makes available a wide range of federal assistance programs for individuals and public infrastructure, including funds for both emergency and longer-term repairs. Not all programs are activated for every disaster. The determination of which programs are authorized is based on the types of assistance specified in the governor’s request and the needs identified during assessments of the affected area to determine the extent of the disaster.

State and local governments are primarily responsible for preparing for and responding to natural and manmade disasters. However, if these governments need assistance, the federal government can provide critical support. The Robert T. Stafford Disaster Relief and Emergency Assistance Act, as amended (Stafford Act), outlines the process state and local governments can use to obtain federal support under the act in response to a disaster. First, a governor must submit a request to the President to declare a major disaster or emergency. Once the declaration is granted, the state becomes eligible for various types of assistance from FEMA, such as personnel, funding, and technical assistance, among others (see sidebar). The Department of Homeland Security, of which FEMA is a component, developed the National Response Framework in 2008 to define the roles of federal, state, and local governments; the private sector; and voluntary organizations, such as the American Red Cross (Red Cross) and faith-based organizations, in response to all types of incidents, including disasters. The framework is designed around the principle that response efforts must adapt to meet evolving requirements resulting from changes in disaster size, scope, and complexity. Under the framework, state and local agencies are primarily responsible for response and recovery activities in their locations, including those involving health and safety. 
For example, state and local agencies are responsible for carrying out evacuations and administering shelters, when necessary, for those affected by a disaster. When an incident occurs that exceeds or is expected to exceed state or local resources, the federal government may use the National Response Framework to organize the federal response by involving all necessary department and agency capabilities and ensuring coordination with response partners. In the case of a federally declared disaster under the Stafford Act, FEMA has primary responsibility for coordinating the federal response, and it targets the level of federal support to the needs specified by states’ requests for assistance. FEMA works with the Red Cross and a number of federal agencies—including the Department of Health and Human Services and the Department of Agriculture—as well as voluntary organizations to provide support in the area of mass care, which includes such services as sheltering, reunification of families, and counseling. In addition, FEMA coordinates with voluntary organizations to ensure immediate needs that are beyond the scope of traditional mass care services are addressed. For example, this may include providing replacement mobility or communication aids to individuals with disabilities. While the nature and complexity of a disaster and state requests for assistance determine the extent of support FEMA provides under the Stafford Act, agency staff are often deployed to the sites of major disasters to assist state and local emergency managers with response and recovery activities. The first FEMA team to deploy is called the Incident Management Assistance Team (IMAT). When the President declares a major disaster, the IMAT, made up of FEMA emergency management staff in areas such as operations, logistics, planning, and finance and administration, travels to the site of the disaster within 12 hours. 
The IMAT helps identify what federal support may be required and may support first responders in providing shelter (including for individuals with access and functional needs), emergency food, and supplies, as well as in restoring government services. FEMA staff deployed to disasters work alongside state counterparts and voluntary organizations at a centralized location, called the joint field office, to facilitate coordination of disaster response and recovery efforts. Staffing at the joint field office depends on the requirements of the disaster but may include the following FEMA offices. Individual Assistance: Staff from this office are generally responsible for providing survivors with information about who may be eligible for assistance and helping them complete the application materials. Staff may provide this information at survivors’ homes or temporary residences as part of Disaster Survivor Assistance Teams or at Disaster Recovery Centers, which are readily accessible facilities or mobile offices where disaster survivors can go for information about FEMA financial assistance programs. One set of staff—Voluntary Agency Liaisons—gathers information about the capacity of voluntary organizations to provide assistance to all disaster survivors, including individuals with access and functional needs, and coordinates the activities of the organizations and FEMA. Office of Disability Integration and Coordination (ODIC): Staff may be deployed from FEMA headquarters or regional offices as part of the Disability Integration Cadre and may include positions such as American Sign Language interpreters. Staff are deployed to provide assistance to federal, state, and local emergency managers on physical, programmatic, and communication accessibility issues. 
For example, staff may provide guidance and technical assistance to Individual Assistance staff to ensure that Disaster Recovery Centers are physically accessible and provide disaster-related information, such as requirements about qualifying for FEMA financial assistance, in accessible formats, such as sign language interpretation. Office of External Affairs (External Affairs): These staff are generally responsible for working with state and local staff, such as those from emergency management departments, to communicate disaster-related information. For example, they may provide guidance on how to apply for FEMA Individual Assistance in the languages spoken within the affected area. In addition to onsite disaster response and recovery, staff in FEMA’s 10 regional offices conduct ongoing efforts to support individuals with access and functional needs that may be affected in future disasters. Regional Administrators in each of the regional offices report directly to the FEMA Administrator and are responsible for the day-to-day management and administration of regional activities and staff. FEMA has assigned one staff person at each of the regional offices to focus on disability integration. Disability integration includes integrating recommended practices and applicable requirements related to individuals with disabilities—such as requirements that may apply under the Rehabilitation Act of 1973 or the Americans with Disabilities Act of 1990 (ADA), both as amended—into all aspects of emergency preparedness and disaster response, recovery, and mitigation. Staff do this by promoting inclusive practices in FEMA regional activities and through outreach to local emergency managers. Inclusive practices are efforts to ensure people with disabilities have equal opportunities to participate in, and receive the benefits of, emergency programs and services. 
Such practices could include involving people with disabilities in emergency evacuation planning, ensuring that shelters are physically accessible, and providing guidance on post-evacuation residency for individuals with disabilities. Regional disability integration staff are also generally expected to track information about service and support shortfalls and about the demographics in each of the FEMA regions, such as local and state statistics on individuals who are deaf or hard of hearing, to help keep track of what FEMA should be prepared to address in the event of a disaster. In addition, FEMA coordinates closely with voluntary organizations to increase their capacity. Voluntary Agency Liaisons working in the regional offices are tasked with building relationships among federal and state governments and voluntary organizations, including national, state, and local voluntary organizations active in disaster. As a part of these efforts, they may also work with local organizations such as schools or churches that are not typically involved in disaster assistance but that may provide spontaneous services in response to disasters (see fig. 2). The Post-Katrina Act set forth specific provisions requiring FEMA to take actions to improve disaster assistance for individuals with access and functional needs. Several of the provisions identify activities related to individuals with disabilities, individuals with limited English proficiency, and children. These provisions include, among others: Disability Coordinator (§ 611): The FEMA Administrator is required to appoint a Disability Coordinator to help ensure that the needs of individuals with disabilities are being properly addressed in emergency preparedness and disaster relief. The Disability Coordinator is charged with consulting with government agencies, such as the National Council on Disability, and organizations that represent the interests and rights of individuals with disabilities, regarding their needs. 
Other responsibilities of the Disability Coordinator include ensuring coordination and dissemination of best practices; developing training materials for emergency managers; promoting accessibility of phone hotlines, websites, and video broadcasts regarding emergency information; and providing guidance and implementing policies to ensure the rights of individuals with disabilities regarding post-evacuation residency are respected. Accommodations Guidelines for People with Disabilities (§ 689(a)): The FEMA Administrator is required, in coordination with the Disability Coordinator and other government entities, including the National Council on Disability, to develop guidelines to accommodate individuals with disabilities regarding (1) the accessibility of, and communications and programs in, shelters, recovery centers, and other facilities; and (2) devices used in disaster operations, including first aid stations, mass feeding areas, portable payphone stations, portable toilets, and temporary housing. Disaster-Related Information Services (§ 689e): The FEMA Administrator is required, in coordination with state and local governments, to identify population groups with limited English proficiency and take them into account when planning for a major disaster. The Administrator must ensure that information is made available in formats that can be understood by individuals with limited English proficiency and with disabilities. The Administrator is required to develop and maintain an informational clearinghouse of model language assistance programs and best practices for state and local governments in providing services related to a major disaster. 
Child Reunification (§ 689b(b)): The FEMA Administrator is required to establish, in coordination with the Attorney General, the National Emergency Child Locator Center within the National Center for Missing and Exploited Children, and establish procedures to make all relevant information available to the Child Locator Center in a timely manner to facilitate reunification of children and families who have been displaced as a result of a major disaster. Since the Post-Katrina Act was enacted, FEMA has promoted inclusive practices for assisting disaster survivors with disabilities in several ways. To comply with the Post-Katrina Act requirement to appoint a Disability Coordinator, FEMA created the Office of Disability Integration and Coordination (ODIC) in 2010 to increase attention to disaster assistance that considers the needs of people with disabilities across FEMA and among groups responding to disasters, such as state and local emergency managers. ODIC officials said that while the role of the Disability Coordinator under the Post-Katrina Act is to ensure that the needs of individuals with disabilities are properly addressed in disaster relief, FEMA has no enforcement authority over state and local agencies. Therefore, they said that ODIC and disability integration staff in FEMA’s regional offices focus on providing guidance to and developing relationships with other FEMA divisions, such as Individual Assistance; other federal agencies; state and local governments; and public, private, and faith-based voluntary organizations. FEMA’s guidance and relationships are focused on building greater awareness of what is required to adequately serve people with disabilities impacted by disasters, such as requirements that may apply under the ADA. As of August 2016, ODIC reported approximately 85 disability integration staff in headquarters and the regions and anticipated increasing the staff to 285 by calendar year 2018 (see fig. 3). 
Disability integration staff we spoke with in FEMA headquarters and in five of the six regional offices where the selected disasters occurred said they work with staff in other FEMA divisions, such as Individual Assistance, to ensure that FEMA programs and facilities are accessible to individuals with disabilities and to develop guidance for state and local emergency managers. For example, ODIC officials told us they helped FEMA’s Individual Assistance division develop guidance in 2010 for state and local emergency managers and shelter operators, Guidance on Planning for Integration of Functional Needs Support Services in General Population Shelters. This document includes, for example, guidance on equipping shelters with durable medical supplies, communication devices, and assistive technology, such as wheelchairs, accessible cots, hearing aids, teletypewriter phones, and screen readers. ODIC officials said they have contributed to several other key documents, including the National Response Framework and its Mass Evacuation Incident Annex, to help integrate the needs of people with disabilities in emergency management. ODIC officials said they also worked closely with FEMA’s Exercise Branch to plan and participate in emergency preparation exercises. For example, during FEMA’s most recent exercise, a disability integration representative was tasked with assessing how well exercise participants across the agency integrated disability-related considerations in their response efforts. In the five disasters we reviewed at which disability integration staff were deployed, these staff helped assess potential Disaster Recovery Center locations to ensure they were accessible to individuals with disabilities. 
This included the physical accessibility of parking lots, entrances, water fountains, and bathrooms, as well as equipping these centers with items to ensure communication accessibility, such as assistive listening devices, documents in braille, and tablet computers that connected to remote American Sign Language translation services. Disability integration staff we spoke with also reported taking a number of steps to engage external stakeholders, including those representing localities, states, other federal agencies, and voluntary organizations, on strategies to assist people with disabilities impacted by disasters. For example: FEMA disability integration staff in California said they worked with state officials, along with other FEMA staff, to provide training to local disability partners on the role of long-term recovery groups and identified grant opportunities for these partners to strengthen their ability to provide recovery support services following disasters. According to ODIC officials, it is standard practice for disability integration staff to create strategies for each disaster to which they deploy to guide their activities, including their coordination with state and local stakeholders. ODIC chairs the Interagency Coordinating Council on Emergency Preparedness and Individuals with Disabilities (council), which was created in 2004 specifically to strengthen emergency preparedness for individuals with disabilities. The council’s membership represents 25 federal agencies and, according to ODIC officials, discusses, among other topics, how interagency partners can engage in preparing for, responding to, recovering from, and mitigating all hazards inclusive of people with disabilities and others with access and functional needs. For example, ODIC officials said that one council meeting included a discussion about integrating the needs of people with disabilities into an active shooter training curriculum. 
ODIC facilitated the development of memorandums of agreement between FEMA and several nongovernmental stakeholders to better support these organizations’ efforts to assist disaster survivors with disabilities. For example, FEMA and one national disability advocacy organization agreed to participate in disaster exercises together and share best practices for delivering services to people with disabilities. Despite the progress made by FEMA to promote practices to address the needs of individuals with disabilities, these efforts may be hindered because, according to ODIC officials, there is no established procedure for involving ODIC in certain efforts related to regional disability integration staff. ODIC officials said that FEMA’s Regional Administrators—not ODIC—determine the reporting chain for these staff, and regions vary in the extent to which they consult with ODIC on disability integration issues. We identified three areas where FEMA’s lack of established procedures for involving ODIC may result in challenges pursuing common goals and effectively sharing and leveraging information to achieve those goals: Role of regional disability integration staff. According to ODIC officials, FEMA’s disability integration efforts are focused on supporting access to disaster-related facilities, programs, and communication for individuals with disabilities. However, ODIC officials told us that some Regional Administrators do not involve ODIC when determining the duties of regional disability integration staff and may not assign these staff to certain key duties. For example, disability integration staff in some regions may be assigned to focus on recovery activities, such as working with community partners to ensure that individuals with disabilities are included in local long-term recovery group efforts. 
However, according to ODIC, these staff may have little involvement in promoting public communication that is accessible to individuals with disabilities following disasters. In addition, ODIC officials said that regions do not consistently include a position for disability integration staff on their IMATs, which are the first FEMA teams to deploy to disaster sites. ODIC officials said that regional IMATs that do not include this position may not have adequate knowledge about guidelines for accessibility for emergency alerts, evacuation processes, and shelters, hindering the ability to plan for the needs of the whole community at the onset of FEMA’s disaster response. Moreover, according to ODIC officials, disability integration staff have misunderstood their roles and responsibilities during disasters. ODIC officials said they have clarified these roles and responsibilities through a FEMA guidance document they issued in December 2016, which they said may help resolve these misunderstandings. Performance goals for regional disability integration staff. ODIC officials said they provided suggested performance goals for regional disability integration staff to these staff’s supervisors in the regions, which included goals related to community engagement, information collection, and coordination with ODIC. ODIC officials told us that FEMA’s Deputy Administrator and Chief of Staff directed Regional Administrators to include ODIC when setting performance goals and conducting assessments of regional disability integration staff. Nevertheless, according to ODIC officials, there are no established procedures for how Regional Administrators are to involve ODIC, and while some regions engage ODIC in this performance management role, others do not. As a result, ODIC may be unaware of regional disability integration staff who are not performing well and may need additional guidance, thereby decreasing the effectiveness of disaster services for people with disabilities. 
Communication among disability integration staff. ODIC hosts weekly phone calls with regional disability integration staff to build cohesion and discuss promising practices and progress on challenges. However, ODIC officials reported that in the first 6 months of 2016, 2 of the 10 regions repeatedly missed weekly calls with ODIC. In addition, 2 other regions were not represented on the calls for a majority of that time because they did not have disability integration staff. ODIC officials in headquarters said they do not directly oversee disability integration staff in the regions, so they cannot require these staff to participate in the weekly calls. Additionally, they said regions with vacancies for key disability integration positions may not have other staff with the expertise to coordinate disability integration activities and report these activities at weekly ODIC meetings. As a result of this inconsistent communication, ODIC officials may not receive information on disability integration activities from all regions, which may affect their ability to oversee and assess these activities. Additionally, regional disability integration staff who do not consistently participate in these calls may lose opportunities to gain important information from each other and from ODIC to use in their disaster assistance efforts. ODIC officials and stakeholders shared several examples of how the lack of procedures for involving ODIC in regional disability integration activities has affected FEMA’s ability to provide informative, responsive support to stakeholders and to leverage existing relationships with partners. For example, ODIC officials said that at times they were unaware that regional disability integration staff were invited to represent FEMA at national conferences, despite reminders that such invitations were required to be cleared through ODIC. Officials said they later learned that these staff provided inaccurate information to state and local emergency managers.
Emergency managers with inaccurate information may be less effective at integrating the needs of people with disabilities in their disaster preparedness efforts. Our prior work on other aspects of FEMA’s coordination between headquarters and regional offices has identified similar challenges, including limitations in FEMA’s ability to coordinate monitoring activities for certain grant programs and cohesively manage its regional contracting workforce. Internal control standards state that, in establishing an organizational structure, agencies should consider how their components interact and define reporting lines that allow the components to communicate information necessary to fulfill their respective responsibilities. Local agencies and voluntary organizations involved in the six disasters we reviewed served central roles in responding to the needs of residents impacted by the disasters, including people with disabilities. For example: Communicating information. In South Carolina, local officials and FEMA disability integration staff said they both provided American Sign Language interpreters to communicate disaster-related information to people who are deaf or hard of hearing. Local officials in California similarly said they made these services available. In California, local officials in one county told us they maintained a list of elderly residents receiving in-home health support services and called every person on the list during the September 2015 wildfires to see if they needed assistance evacuating their homes. Providing shelter. Local officials and representatives of voluntary organizations we spoke with said they provided or supported shelters for those affected by the disasters, including those with disabilities. In the three disaster areas we visited, local officials and representatives of voluntary organizations said shelter facilities were generally compliant with ADA requirements or met other accessibility guidelines, as applicable. 
Assisting with daily living activities. According to representatives of voluntary organizations, local officials, and FEMA regional staff in the three disaster areas we visited, voluntary organizations helped individuals with disabilities maintain independence with their daily living activities, such as by replacing durable medical equipment that was lost or damaged. For example, local officials and FEMA regional staff in Texas said that voluntary organizations helped provide individuals with oxygen tanks, wheelchairs, walkers, motorized scooters, communication devices, and glasses, and accommodated their needs for personal assistance services. State officials in the three disaster areas we visited said they supported local and voluntary organizations’ response efforts with strategic initiatives designed to address the needs of individuals with disabilities. For example, state officials from Texas told us that the state established a Disability Task Force on Emergency Management. Convened in 2011, the Task Force helped produce guidance to assist local emergency managers and shelter planners in understanding certain requirements and best practices for assisting disaster survivors with disabilities. The guidance includes tips on interacting with and assisting people who use service dogs, canes, crutches, or wheelchairs, or who have mental illnesses. In recognition of the central role that localities, voluntary organizations, and states serve in disaster assistance activities, FEMA has developed guidance and training to increase their awareness of requirements and best practices for including individuals with disabilities in emergency planning. Specifically, ODIC officials said that in 2013 FEMA began offering a key course—Integrating Access and Functional Needs into Emergency Planning—to state and local emergency managers and other stakeholders.
Participants can take the 2-day course free of charge at FEMA’s Emergency Management Institute in Maryland or, as FEMA resources allow, from FEMA instructors from ODIC who travel to locations across the country to deliver the course. The course includes substantial information on incorporating the needs of people with disabilities in emergency planning, including information that could have helped address certain challenges we heard about from local officials. For instance, during the September 2015 wildfires in California, local officials said that staffing constraints and a lack of onsite clinical expertise prompted them to redirect individuals with disabilities from general population shelters to hospitals or other facilities where officials felt they could be more safely accommodated. One module of FEMA’s training addresses the importance of accommodating individuals with disabilities in general population shelters unless they have acute medical needs. Similarly, disability advocates caution against redirecting individuals with disabilities to medical facilities for shelter during disasters as this practice can have negative impacts on these individuals’ personal independence. In another example, local officials in one Texas county described challenges communicating with the deaf population following the 2015 flooding and noted a recent effort to add video capabilities to their 9-1-1 system to reach such populations. FEMA’s training also covers practical tips on how to ensure access to communication about disaster assistance, including through video relay services. ODIC officials said that all state and local emergency managers, public health officials, community leaders, and other stakeholders would benefit from the training, but officials do not have specific goals or milestones in place for the number who receive it.
ODIC officials estimated that at least 843 federal, state, and local emergency managers participated in the training from June 2013 through July 2016. However, given the number of local officials who play a role in disaster response, many state and local emergency managers have yet to take the training. Officials said they have not gathered information about how many additional emergency managers and other local officials could benefit from this training and have not set goals for future participation. Further, ODIC faces challenges in satisfying current requests for training, according to officials. Some state and local officials have not taken the course because of the cost of traveling to Maryland to access it in person and have requested the training be offered locally. As of July 2016, course instructors from ODIC have delivered the training locally in 22 states and in the District of Columbia. However, ODIC officials reported that many of these states have requested additional training in other locations within the state and ODIC has been unable to fulfill these requests. ODIC officials said they have limited travel funding to deliver the training onsite for all who request it and that FEMA cannot legally accept funding from states for travel expenses for its employees to conduct the course. ODIC officials also said they rely on a small pool of staff—including the ODIC director—to teach the course. ODIC staff said they recently began to consider alternative training delivery methods, such as a virtual course to supplement the in-person course in Maryland. However, they have not evaluated such alternatives and, as yet, have no specific plans to offer the course using an alternative method. We have previously reported that agencies should consider learning that provides trainees with the flexibility to choose among different training delivery methods (such as web-based and instructor-led) while leveraging resources in the most efficient way possible.
We also previously recommended that FEMA establish and use goals, milestones, and performance measures to monitor the performance of its programs designed to help state and local stakeholders determine their readiness to respond to disasters. Leading practices identified in the Project Management Institute’s The Standard for Program Management call for agencies to develop meaningful measures to monitor program performance and to track the accomplishment of the program’s goals and objectives. Without collecting information about the potential pool of participants, identifying goals for the number of state and local officials who receive the training, and providing the training in the alternative formats needed to meet the goals, ODIC may be missing opportunities to ensure its training reaches a sufficiently wide audience. Further, state and local officials who have a central role in assisting people with disabilities impacted by disasters may lack awareness of how to adequately serve this population. FEMA provides information on its programs to individuals with limited English proficiency through written and oral translations in other languages and through additional resources, such as a coordinator and support team focused on limited English proficiency. As a general approach, FEMA officials said they identify languages spoken in each state through data from the Census Bureau. FEMA’s External Affairs staff maintain these data to use when an area is impacted by a disaster. FEMA officials said that these data represent an initial language assessment to determine the language needs of the affected population. After a disaster is declared, officials said they continue to refine their understanding of the language needs in the area by collecting additional demographic information from state and local officials, volunteer organizations, and disaster survivors. For example, the Census data that FEMA used in 2015 indicated that 170 languages were spoken in Texas.
Following that state’s flooding, FEMA officials said they reached out to officials in the most affected areas and identified Spanish and Vietnamese as the languages that were most commonly spoken in the area. FEMA maintains many previously translated documents in commonly encountered languages in an agency library that it can readily access when a disaster occurs. According to the plan that summarizes how FEMA ensures disaster assistance communications are accessible to those with limited English proficiency, the online library contains news releases, flyers, and other written information in more than 20 languages that FEMA produced following Hurricane Sandy in 2012 (see sidebar). Such translated documents may include Individual Assistance program brochures explaining the registration process and correspondence about the status of individual applications, disaster preparedness and recovery activities, and press releases with pertinent disaster-related information. In all five disasters we reviewed in which FEMA provided Individual Assistance, officials told us FEMA provided printed materials, such as those about FEMA financial assistance, in multiple languages at locations such as the Disaster Recovery Centers. The results of the language assessment process also help inform FEMA decisions about the number of staff proficient in multiple languages to deploy to the disaster-affected location. FEMA deploys these staff to explain the benefits and registration process for FEMA programs to individuals who are not proficient in English. External Affairs can also deploy members of a support team to assist regional office staff with new strategies to reach limited English proficiency populations affected by a disaster and can initiate contracts for additional translation services.
For example, during the 2015 wildfires in California, External Affairs both deployed a support team member and contracted with a translation service to assist with media communications to Spanish speakers. To identify survivors’ preferred languages, staff can use aids produced by FEMA and the Office for Civil Rights and Civil Liberties (another component of the Department of Homeland Security), such as language identification guides (see fig. 4). If staff proficient in a specific language are not available at Disaster Recovery Centers, individual survivors can call FEMA’s helpline, which provides translation services. The toll-free number is posted online and on FEMA documents. These translation services are offered through FEMA’s National Processing Service Centers. According to FEMA’s Draft Language Access Plan, FEMA can provide disaster assistance information in more than 50 languages. FEMA publicizes these services through flyers, news releases, outreach to the community, and other promotion efforts. In addition, throughout a disaster, staff from these service centers and External Affairs continuously share information on the language needs of a location, officials said, and may adjust their resources based on changes in language requests. In four of the five disasters we reviewed where FEMA provided Individual Assistance, FEMA deployed bilingual staff to work in Disaster Recovery Centers, as part of Disaster Survivor Assistance Teams, or as Voluntary Agency Liaisons. For example, following the 2015 flooding in Oklahoma, officials told us that while visiting a community that primarily speaks Spanish, Disaster Survivor Assistance Team members proficient in Spanish were able to communicate information about FEMA assistance to survivors. However, state officials in Texas told us that while FEMA provided some bilingual staff following the 2015 Memorial Day flooding, those resources were inadequate in some areas, such as along the Texas border. 
As a result, the officials said that voluntary organizations assisting in the recovery provided translation services to fill this gap. According to regional FEMA officials, FEMA deployed 30 Spanish interpreters to Disaster Recovery Centers and Disaster Survivor Assistance Teams in Texas following the 2015 flooding. They also noted that while FEMA initially could not provide sufficient resources to fulfill requests for these personnel, additional bilingual staff were deployed within 3 weeks of the disaster declaration. Individuals affected by the flooding were able to apply for FEMA assistance until approximately 4 months after the date of declaration. FEMA also engages in activities during non-disaster times to ensure access to disaster-related information for individuals with limited English proficiency. For example, FEMA officials said External Affairs has established two positions to work with stakeholders before a disaster occurs: a Limited English Proficiency coordinator and a media relations specialist. The coordinator is responsible for ensuring access to FEMA program information for individuals with limited English proficiency, such as by ensuring that FEMA’s website includes translated information. The coordinator also conducts strategic planning and leverages relationships with national and local organizations to broaden FEMA’s reach into communities with limited English proficiency. The media relations specialist focuses specifically on building relationships with Spanish-speaking media. In addition, FEMA offers training to FEMA staff and state and local emergency managers that includes elements intended to ensure access to information for individuals with limited English proficiency.
FEMA requires all of its employees to complete civil rights training, which includes a module on the federal laws that prohibit discrimination and require meaningful access to services for this population, how to identify language needs, available multilingual FEMA resources, and different strategies for conducting outreach in communities. FEMA also developed a series of trainings designed for state and local officials—which is offered in a variety of formats, including at FEMA’s Emergency Management Institute and online—on effective approaches to disseminating information publicly following disasters. For example, objectives of one course include identifying critical audiences and learning to recognize the functional needs and challenges of different audiences. FEMA officials noted that all of these efforts, including the language assessment, translated materials, oral interpretation services, and trainings, represent FEMA’s efforts to share best practices with state and local emergency managers on providing services to individuals with limited English proficiency, to comply with the Post-Katrina Act. Rather than maintaining an informational clearinghouse in the form of a central website or database that may become quickly outdated, officials told us that these efforts allow them to tailor FEMA assistance for disaster survivors to the unique resources available and needs of each location affected by a disaster. State, local, or voluntary organizations in all six disasters we reviewed disseminated information on evacuations and shelters using a range of communication methods and translation services to assist individuals with limited English proficiency in accessing resources. For example, officials said that local agencies communicated through hotlines and reverse 9-1-1 calls to relay evacuation and shelter information to the general public.
As described in the National Response Framework, state and local agencies are generally responsible for distributing health and safety information—including information related to evacuations and shelters. We found that state emergency planning documentation for all six disasters we reviewed assigned responsibility to state and local agencies for disseminating this type of information. To provide access to such information specifically to individuals with limited English proficiency, state, local, or voluntary organizations in five of the six disasters we reviewed reported providing a range of translation services. State or local agencies in two of the disasters had bilingual staff members available, and state or voluntary organizations in three other disasters provided access to interpreters by phone. In three of the six disasters, state or local agencies relied on support from voluntary organizations to provide interpreters for those who were not English proficient. For example, during the 2015 snowstorms in Massachusetts, state officials said that a voluntary organization’s call center provided information, such as shelter locations, in multiple languages. State and local officials in two disasters relied on local churches to provide information to their members or other Spanish-speaking residents. For example, officials in Oklahoma said they relied on churches in a large Spanish-speaking community in Oklahoma City to spread information on recovery resources following flooding in 2015. State and local agencies can also leverage FEMA resources to translate or distribute recovery information. According to FEMA officials, state and local emergency managers can obtain the results of FEMA’s language assessments, which include information about populations with limited English proficiency, to better target resources to needs. 
In addition, FEMA officials said they fulfilled requests from California officials to translate state documents into Spanish following the 2015 wildfires. Similarly, following the 2015 flooding in South Carolina, state officials said they requested help from FEMA to print information about registering for FEMA assistance in English and Spanish on fans, which were then distributed to flood victims in churches (see fig. 5). Despite these efforts to reach individuals with limited English proficiency during disasters, local officials in two of the disasters we reviewed said limited resources prevented them from translating disaster information into other languages. To address this problem, local officials may seek assistance from external resources. For example, local officials in Texas said that although emergency communications are not currently translated into other languages, they are considering partnering with private organizations to facilitate translations when needed. According to FEMA’s Draft Language Access Plan and External Affairs officials, FEMA can also provide support in the form of translating disaster materials into the languages spoken by those affected by disasters. FEMA has worked to develop resources to reunite children separated from their families during disasters, but it has not had to activate them in an actual disaster since they were developed. After Hurricane Katrina highlighted the need to reunite children with their families and provide other assistance to children, the Post-Katrina Act required that FEMA establish the National Emergency Child Locator Center, a child reunification call center operated by the National Center for Missing and Exploited Children (NCMEC).
The first step in establishing this center was to enhance information sharing during federally declared disasters, which FEMA officials said they did through a memorandum of understanding signed in 2007 with NCMEC, the Department of Justice, the Federal Bureau of Investigation, and the Red Cross. Following a disaster declaration, FEMA may request that NCMEC activate the National Emergency Child Locator Center and then refer child-specific inquiries to NCMEC. NCMEC uses a call center as part of its day-to-day operations, but following a disaster, it can increase staffing to meet anticipated call volume. Following a disaster, call center staff would work with law enforcement, NCMEC staff, counterparts at FEMA, and the Red Cross to learn the ground-level context of a disaster to more effectively field calls. NCMEC also maintains the Unaccompanied Minors Registry, a national repository for information about children who may be separated from their families as a result of a disaster. The first of its kind, the registry would allow entities such as law enforcement, medical facilities, shelter operators, and individuals to submit reports about children who are displaced due to a disaster. It was designed with a web-based portal that could allow for one-way communication from these external sources. Law enforcement and NCMEC staff could use the information submitted to the registry to reunite families by cross-referencing it against any information they may have collected through other means. According to FEMA officials and emergency managers, no disasters have required national child reunification support since Hurricane Katrina, where an estimated 5,000 children were reported to be separated from their families and guardians during the storm. As a result, FEMA has not called upon NCMEC to activate the National Emergency Child Locator Center. Nevertheless, FEMA continues to work with NCMEC and other partners on maintaining reunification resources.
For example, in 2015 FEMA renewed an agreement it has with NCMEC to provide funding for deploying NCMEC personnel onsite following disasters. In addition, FEMA addressed child reunification during its 2016 Cascadia Rising training exercise, and FEMA participants assembled a child reunification task force. One challenge members of that task force identified during the exercise was related to communicating with children and parents with auditory or visual impairments. Since the enactment of the Post-Katrina Act, FEMA has worked with governmental and non-governmental entities more generally to address the needs of families with children who are affected by disasters. In 2007, FEMA joined other federal agencies and voluntary organizations, such as the Red Cross, in the establishment of the National Commission on Children and Disasters. Established by the Kids in Disasters Well-being, Safety, and Health Act of 2007, the group worked to identify gaps in policy for disaster assistance for children, and it issued reports in 2009 and 2010 that recommended changes to close those gaps, such as improving the capacity to provide child care services in the immediate aftermath of a disaster. In 2009, FEMA established a working group on children in disasters to engage representatives of multiple FEMA divisions in addressing the recommendations. The official leading the working group was appointed in 2015 to be the National Advisor on Children and Disasters. The responsibility of this advisor is to coordinate efforts and implement resources across FEMA and externally. FEMA and NCMEC also co-led the effort to issue a report in 2013 as guidance for post-disaster reunification. A primary goal of the report was to support reunification processes and procedures by identifying the roles and responsibilities of governments and voluntary organizations, among other stakeholders.
It was designed as a guidance tool to assist these stakeholders in developing reunification plans that are inclusive, such as considering the needs of children who are nonverbal or who have disabilities. FEMA officials from two of the six disasters we reviewed described other assistance provided to children. Following the California wildfires, FEMA worked with school districts in the affected areas to bus students temporarily living outside their communities, in shelters or temporary housing, to their regular schools. Following the mudslide in Washington, the state provided similar assistance for students and teachers separated from their schools by the mudslide area by opening up an alternative access route. In addition, FEMA provided individual families with reimbursements for the extra cost of child care resulting from the extended commuting time required by the detour. Staff from another federal agency involved in disaster recovery, the Department of Health and Human Services’ Administration for Children and Families, initiated a task force designed to provide mental and emotional health resources for area children affected by the mudslide. For example, the organizations involved in the task force funded a full-time counselor at an area school. FEMA has taken steps since Hurricane Katrina to promote inclusive emergency management practices through its Office of Disability Integration and Coordination (ODIC). However, this progress may be hampered because FEMA lacks procedures to help ensure regions consistently involve ODIC in their disability integration activities and because of ODIC’s limited approach to delivering critical disability integration training. The lack of established procedures for involving ODIC in the disability integration activities of the regional staff has resulted in misunderstood roles, a lack of awareness about potentially underperforming staff, and inconsistent communication between the regions and headquarters. 
In addition, the limited training delivery methods ODIC uses for its key disability integration course, as well as ODIC’s lack of understanding about the pool of potential participants, may mean a substantial number of state and local emergency managers do not receive it. Both the use of regional disability integration staff and the training serve as important methods of communicating with state and local emergency managers to provide them with knowledge and tools to best support individuals with disabilities during a disaster. Without improvements in these areas, FEMA may be missing opportunities to help these emergency managers better meet the needs of individuals with disabilities affected by disasters. 1. To better ensure FEMA’s regional activities effectively support individuals with disabilities, the Secretary of Homeland Security should direct the FEMA Administrator to take steps to establish written procedures for how regions should involve the Office of Disability Integration and Coordination in clarifying disability integration staff’s roles, evaluating staff performance, and setting expectations for how staff communicate with headquarters and the regions. 2. To better position FEMA to expand access to key training on incorporating access and functional needs into emergency planning for state, local, and voluntary organization emergency management officials, the Secretary of Homeland Security should direct the FEMA Administrator to evaluate alternative cost-effective methods for delivering its course on access and functional needs, such as via virtual classes. 3. 
To help ensure its key training on incorporating access and functional needs into emergency planning reaches a sufficiently wide audience, the Secretary should direct the FEMA Administrator to collect information about the potential pool of participants, set general goals for the number of state and local emergency managers that will take this course, and implement the delivery methods needed to meet these goals. We provided a draft of this report to the Federal Emergency Management Agency (FEMA), within the Department of Homeland Security (DHS), for review and comment. Officials provided written comments, which are reproduced in appendix II and described below, as well as technical comments, which we incorporated in the report as appropriate. FEMA agreed with all three of our recommendations and outlined steps it plans to take to implement them. Regarding our first recommendation to establish written procedures to clarify the role of regional disability integration staff, officials noted FEMA’s plans to convene a working group to identify effective communication channels and foster collaborative relationships among ODIC, Regional Administrators, and regional disability integration staff. More specifically, FEMA expects the workgroup to build on the directive issued in December 2016 and establish written procedures on (1) the roles and responsibilities of regional disability integration staff; (2) developing performance goals to assess their performance; and (3) opening clearer communication channels among these staff, Regional Administrators, and disability integration staff in headquarters. We applaud FEMA’s effort to clarify the roles and responsibilities of regional disability integration staff in its recently issued directive. 
We believe that FEMA’s proposed plan to further encourage collaboration through the issuance of instructions for implementing the directive will help to better ensure support for and inclusion of individuals with disabilities affected by disasters across all FEMA regions. Regarding our second and third recommendations related to the delivery method and participants for FEMA’s key course on integrating access and functional needs into emergency planning, FEMA officials said they plan to explore additional delivery methods through various means, including through participant feedback. Additionally, they plan to develop a baseline of the number of emergency managers taking the course, including identifying specific numbers for the target audience by state and establishing a reasonable metric for the annual delivery of the course. FEMA officials stated that these metrics will also help inform FEMA leadership on the potential need to develop alternative delivery methods. We encourage FEMA to pursue these efforts and continue to believe that such efforts, if implemented, will provide a greater number of state and local emergency managers and others with the knowledge and tools to best support people with disabilities before, during, and after disasters. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
In addition to the contact named above, Sara Schibanoff Kelly (Assistant Director), Sara Pelton (Analyst-in-Charge), Aimee Elivert, and Lauren Friedman made key contributions to this report. Also contributing to this report were Susan Aschoff, Jessica Botsford, Sarah Cornetto, Lauren Gilbertson, Kristen Jones, Jean McSween, Mimi Nguyen, Debra Prescott, Almeta Spencer, and Erica Varner. Emergency Communications: Effectiveness of the Post-Katrina Interagency Coordination Group Could Be Enhanced. GAO-16-681. Washington, D.C.: July 14, 2016. Disaster Recovery: FEMA Needs to Assess Its Effectiveness in Implementing the National Disaster Recovery Framework. GAO-16-476. Washington, D.C.: May 26, 2016. Disaster Response: FEMA Has Made Progress Implementing Key Programs, but Opportunities for Improvement Exist. GAO-16-87. Washington, D.C.: February 5, 2016. Federal Emergency Management Agency: Strengthening Regional Coordination Could Enhance Preparedness Efforts. GAO-16-38. Washington, D.C.: February 4, 2016. Emergency Management: FEMA Has Made Progress since Hurricanes Katrina and Sandy, but Challenges Remain. GAO-16-90T. Washington, D.C.: October 22, 2015. Disaster Contracting: FEMA Needs to Cohesively Manage Its Workforce and Fully Address Post-Katrina Reforms. GAO-15-783. Washington, D.C.: September 29, 2015. Emergency Management: FEMA Collaborates Effectively with Logistics Partners but Could Strengthen Implementation of Its Capabilities Assessment Tool. GAO-15-781. Washington, D.C.: September 10, 2015. American Red Cross: Disaster Assistance Would Benefit from Oversight through Regular Federal Evaluation. GAO-15-565. Washington, D.C.: September 9, 2015. Emergency Preparedness: Opportunities Exist to Strengthen Interagency Assessments and Accountability for Closing Capability Gaps. GAO-15-20. Washington, D.C.: December 4, 2014. Actions Taken to Implement the Post-Katrina Emergency Management Reform Act of 2006. GAO-09-59R. Washington, D.C.: November 21, 2008. 
National Disaster Response: FEMA Should Take Action to Improve Capacity and Coordination between Government and Voluntary Sectors. GAO-08-369. Washington, D.C.: February 27, 2008. | In 2005, individuals with disabilities, individuals with limited English proficiency, and families with children were disproportionately affected by Hurricane Katrina. For example, some of those who had to abandon their wheelchairs could not evacuate because they were unable to wait in long lines for evacuation buses. The Post-Katrina Act required FEMA and other entities to take certain actions to assist these individuals, such as through the establishment of a Disability Coordinator within FEMA. GAO was asked to examine implementation of the Post-Katrina Act. This report assesses the extent to which FEMA and other entities provide disaster services to individuals with disabilities, individuals with limited English proficiency, and children in need of family reunification. GAO examined federal, state, and local disaster assistance efforts for six major disasters that occurred from March 2014 through October 2015, where federal response and recovery efforts included assistance to the three target groups and that varied in location and type of disaster. GAO interviewed relevant officials, visited three of the six sites, and analyzed emergency operations plans and disaster summary reports. The Federal Emergency Management Agency (FEMA) has taken steps to improve its disaster services for people with disabilities and its support to other entities, such as state and local governments. FEMA established the Office of Disability Integration and Coordination (ODIC) following enactment of the Post-Katrina Emergency Management Reform Act of 2006 (Post-Katrina Act) to lead the agency's efforts to promote inclusiveness in disaster planning, response, and recovery. 
However, there is no established procedure for FEMA Regional Administrators, who oversee disability integration staff in the regions, to involve ODIC in the activities of these staff. As a result, regions vary in the extent to which they consult with ODIC, which has led to a lack of clarity in regional disability integration staff roles, a lack of awareness of potentially underperforming staff, and inconsistent communication between the regions and headquarters. Federal internal control standards state that organizational structures should allow the organization's components to communicate information necessary to fulfill their respective responsibilities. Communication gaps between ODIC and the regions may prevent regional disability integration staff from effectively supporting state and local governments in meeting the needs of individuals with disabilities affected by disasters. ODIC also has not established goals for how many state and local emergency managers should take its key training on integrating the needs of individuals with disabilities into disaster planning. Nor has ODIC evaluated alternative methods to deliver the training more broadly, such as virtually in addition to classroom training. As a result, state and local emergency managers may be ill-prepared to provide effective disaster services to those with disabilities. FEMA and other entities assist individuals with limited English proficiency by translating information on disaster assistance programs. FEMA provides information about its assistance programs using print materials in other languages, bilingual staff, and a helpline with translators for more than 50 languages. State, local, and voluntary organizations also disseminate information on health and safety, such as evacuations and sheltering. In five of the six disasters GAO reviewed where translation was needed, these entities reported using a range of services, from bilingual staff to multilingual helplines. 
FEMA worked with the National Center for Missing and Exploited Children (NCMEC) to establish a national call center designed to field calls with information about children separated from their families during disasters. NCMEC also maintains a registry that serves as a web-based repository created to collect this information. However, according to FEMA officials, no disasters since Hurricane Katrina have required national child reunification support. Nevertheless, FEMA continues to work with NCMEC on maintaining reunification resources, such as by funding the deployment of NCMEC personnel following disasters. FEMA should establish written procedures for involving ODIC in regional activities; set goals for the number of state and local emergency managers who will take a key training on disability integration; and evaluate alternative delivery methods for the training. FEMA concurred with all of the recommendations. |
The Domestic Preparedness Program is aimed at enhancing domestic preparedness to respond to and manage the consequences of potential terrorist WMD incidents. The authorizing legislation designated DOD as lead agency, and participating agencies include FEMA, the Federal Bureau of Investigation (FBI), the Health and Human Services’ Public Health Service, the Department of Energy, and the Environmental Protection Agency. The Army’s Chemical and Biological Defense Command designed a “train-the-trainer” program to build on the existing knowledge and capabilities of local first responders—fire, law enforcement, and medical personnel and hazardous materials technicians—who would deal with a WMD incident during the first hours. The legislation also designated funds for the Public Health Service to establish Metropolitan Medical Strike Teams to help improve cities’ medical response to a WMD incident. Other aspects of the program included systems to provide information and advice to state and local officials and a chemical/biological rapid response team. DOD received $36 million in fiscal year 1997 to implement its part of the program, and the Public Health Service received an additional $6.6 million. DOD’s fiscal year 1998 and 1999 budgets estimate that $43 million and $50 million, respectively, will be needed to continue the program. DOD expects the last 2 years of the 5-year program to cost about $14 million to $15 million each year, and continuing an exercise program for 2 more years could add another $10 million. Thus, the total projected program cost for the DOD segment could exceed $167 million. This does not include the costs of the Public Health Service, which hopes to establish and equip (an average of $350,000 of equipment and pharmaceuticals per city) Metropolitan Medical Strike Teams in all 120 program cities. 
In addition to the $6.6 million that the Public Health Service initially received, it spent $3.6 million in fiscal year 1997 to expand the number of strike teams. The Public Health Service received no additional funding in fiscal year 1998, but it estimates program requirements at $85 million for the remaining 93 cities. Domestic Preparedness Program training gives first responders a greater awareness of how to deal with WMD terrorist incidents. Local officials in the seven cities we visited praised the training program content, instructors, and materials as well as DOD’s willingness to modify it based on suggestions from local officials. They also credited the program with bringing local, state, and federal regional emergency response agencies together into a closer working relationship. By December 31, 1998, DOD expects to have trained about one-third of the 120 cities it selected for the program. All training is to be complete in 2001. The first responders trained are expected to train other emergency responders through follow-on courses. The cities we visited were planning to institutionalize various adaptations of the WMD training, primarily in their fire and law enforcement training academies. A related field exercise program to allow cities to test their response capabilities also has begun. DOD decided to select cities based on core city population. It also decided to select 120 cities, which equates to all U.S. cities with a population of over 144,000 according to the 1990 census. The 120 cities represent about 22 percent of the U.S. population and cover at least 1 city in 38 states and the District of Columbia. Twelve states and the U.S. territories have no cities in the program, and 25 percent of the cities are in California and Texas. DOD took a city approach because it wanted to deal with a single governmental entity that could select the most appropriate personnel for training and receive equipment. 
In selecting the cities DOD did not take into account a city’s level of preparedness or financial need. There was also no analysis to evaluate the extent to which the cities selected for the program were at risk of a terrorist attack warranting an increased level of preparedness, or whether a smaller city with high risk factors might have been excluded from the program due to its lower population. In fact, in none of the seven cities we visited did the FBI determine there was a credible threat of a WMD attack, which would be one factor considered in a threat and risk assessment. In our April 1998 report, we cited several public and private sector entities that use or recommend threat and risk assessment processes to establish requirements and target investments for reducing risk. Although we recognize there are challenges to doing threat and risk assessments of program cities, we believe that difficulties can be overcome through federal-city collaboration and that these assessments would provide a tool for making decisions about a prudent level of investment to reduce risks. In implementing the Domestic Preparedness Program, DOD could leverage state emergency management structures, mutual aid agreements among local jurisdictions, or other collaborative arrangements for emergency response. By delivering the program to cities based on population size, DOD is replicating training in nearby cities that might be part of the same response system or mutual aid area. Because of such mutual aid agreements and response districts or regions—as well as traditional state roles in both training and the established federal response system—a more consolidated approach could have resulted in fewer training iterations. Training in fewer locations while taking advantage of existing emergency response structures could hasten the accomplishment of program goals and reinforce local response integration. 
Such an approach also could cover a greater percentage of the population and make effective use of existing emergency management training venues. Under this approach, WMD training would be delivered over the long term through existing state training systems. As shown in appendix I, DOD’s city approach resulted in clusters of nearby cities, each of which is to receive training and equipment. Our analysis shows that 14 clusters of 44 different cities, or 37 percent of the total number of the cities selected for the program, are within 30 miles of at least one other program city. Southern California is a key example of the clustering effect where training efficiencies could be gained. Appendix II shows California’s mutual aid regions. Consistent with the statewide standardized emergency management system involving countywide operational areas within 6 mutual aid regions, the Los Angeles County sheriff is in charge of the consolidated interagency response to an incident occurring in any of the county’s 88 local jurisdictions and 136 unincorporated areas. These include Los Angeles, Long Beach, and Glendale, all of which are treated separately in the program. Further, the nearby cities of Anaheim, Huntington Beach, Santa Ana, San Bernardino, and Riverside are within 30 miles of at least one other program city and also are treated separately. Through mutual aid and under California’s statewide system, Los Angeles county conceivably could assist or be assisted by these other neighboring program cities or any other jurisdictions in the state in the event of a major incident. Similarly, as shown in appendix III, Virginia has 13 regional hazardous materials teams to respond to a WMD incident. Through these regional teams operating under state control, four adjacent program cities—Norfolk, Virginia Beach, Newport News, and Chesapeake—would assist one another along with Portsmouth and Hampton, which are not program cities. 
Texas has four program cities less than 30 miles from each other: Dallas, Fort Worth, Irving, and Arlington. In yet another example, the Washington, D.C., metropolitan area established a Metropolitan Medical Strike Team with a council-of-governments approach involving six jurisdictions in Virginia, Maryland, and the District of Columbia—these jurisdictions would support each other in the event of a WMD incident. DOD treats Washington, D.C., and Arlington, Virginia, separately for the training and equipment segments of the program. Similar strike teams in other cities are designed to be integrated into the local emergency response and medical systems for that particular area. In response to comments by state and local officials, DOD began holding regional meetings to introduce the program. Nevertheless, each program city still receives its own training and equipment package. Cities may invite representatives from neighboring jurisdictions and state agencies, but classroom space is limited, and if the neighboring city is a program city, it will eventually receive its own on-site training. DOD could have used state structures to deliver its training. Some states have academies and institutes to train first responders and emergency managers. For example, California’s Specialized Training Institute provides emergency management training to first responders statewide. In Texas, the Division of Emergency Management conducts training for local first responders, and fire protection training is provided through the Texas Engineering Extension Service. Under current circumstances, the individual cities whose personnel were trained as trainers are to ensure that the appropriate courses are delivered to rank-and-file emergency response personnel. Cities we visited were adapting the DOD courses differently and using different venues to deliver the training. Cities planned to deliver portions of the courses both directly and through their local academies. 
One delivery method that DOD could consider to reach large numbers of first responders while minimizing travel costs is distance learning. The U.S. Army Medical Research Institute of Infectious Diseases, for example, has used distance learning techniques through satellite-to-television links. The legislation authorizes DOD to lend rather than give or grant training equipment to each city. The loan agreement between DOD and the cities specifies that the loan is for 5 years and that the cities are to repair, maintain, and replace the equipment. The loan agreement terms have caused frustration and confusion among local officials. Some cities we visited viewed the acceptance of the equipment as tantamount to an unfunded federal mandate because DOD is providing no funds to sustain the equipment. At least two cities were reluctant to accept the equipment unless DOD would provide assurances that they could use it operationally and would not be asked to return it. Although such assurances conflict with the loan agreement terms, DOD officials acknowledged that cities could keep the equipment and use it operationally if necessary. DOD officials also pointed out that much of the equipment has no more than a 5-year useful life and is largely incompatible with standard military-specification equipment. Further, expectations have been raised among some local officials that the federal government may eventually provide funds to sustain the program and to provide even more equipment to meet cities’ perceived operational requirements. DOD officials said that the equipment was intended only to support cities’ training needs. Also, DOD wanted to encourage cities to share the burden of preparing for WMD terrorism by funding additional equipment needs themselves. However, no assessments have been undertaken as part of the Domestic Preparedness Program to help define equipment requirements for WMD over and above what is needed for an industrial hazardous materials incident response. 
Although the FBI and the intelligence community see growing interest in WMD by groups and individuals of concern, the intelligence community concluded that conventional weapons will continue to be the most likely form of terrorist attack over the next decade. Such threat information would be a factor in a threat or risk assessment process that could be used as a tool for determining equipment requirements. The Congress intended the Domestic Preparedness Program to be an interagency effort with DOD as lead agency. Under FEMA leadership, the Senior Interagency Coordination Group provided a forum for DOD and the other involved agencies to share information. However, in developing the program, some member agency officials stated that DOD did not always take advantage of the experience of agencies that were more accustomed to dealing with state and local officials and more knowledgeable of domestic emergency response structures. For example, some agency representatives said that they offered suggestions such as taking a metropolitan area approach and coordinating with state emergency management agencies instead of dealing directly and only with cities. DOD officials noted that because the group often did not react to DOD proposals or could not achieve consensus on issues, DOD moved forward with the program without consensus when necessary. According to participants, the group did influence two decisions. DOD initially planned to cover 20 cities in the first phase of the program, but the group raised the number to 27 so that 7 cities would be trained sooner than their population would otherwise warrant. The seven cities were raised in priority to account for geographical balance, special events, and distance from the continental United States. Also, concerned about DOD’s methodology and cities’ presumed negative perceptions, the group recommended that DOD abandon its plan to have cities conduct a formal self-assessment of their capabilities and needs. 
But the group did not press for an alternative assessment methodology, which resulted in the lack of any analytical basis for cities to determine their requirements for a prudent and affordable level of preparedness for WMD (a desired end state) or to guide DOD or the cities in defining individual cities' requirements or needs. The Senior Interagency Coordination Group did not resolve the issue of similar or potentially overlapping terrorism-related courses. A joint Department of Justice and FEMA 2-day basic concepts course on emergency response to terrorism was being developed at about the same time as the Domestic Preparedness Program, and FEMA teaches subjects applicable to WMD and terrorism in its Emergency Management Institute and the National Fire Academy. The Department of Justice and FEMA courses and the DOD courses were developed separately. Some local officials viewed the growing number of WMD consequence management training programs, including the Domestic Preparedness Program, the Department of Justice and FEMA courses, FEMA Emergency Management Institute courses, National Fire Academy courses, and the National Guard's National Interagency Counterdrug Institute course, as evidence of a fragmented and possibly wasteful federal approach toward combating terrorism. Similarly, multiple programs with equipment segments—such as the separate DOD and Public Health Service programs and the new Department of Justice equipment grant program—are causing frustration and confusion at the local level and are resulting in further complaints that the federal government is unfocused and has no coordinated plan or defined end state for domestic preparedness. Both equipment portions of the program, which were designed and implemented separately, cover personal protection, decontamination, and detection equipment. 
The separation of the $300,000 worth of DOD equipment and the average $350,000 Public Health Service equipment and pharmaceuticals required local officials to deal with two federal agencies’ requirements and procedures. It also required local officials to develop separate equipment lists and to ensure compatibility and interoperability of the equipment, optimize the available federal funding, and avoid unnecessary duplication. A truly joint, coordinated equipment program could have alleviated the administrative burden on city officials and lowered the level of confusion and frustration. Although the Public Health Service circulated cities’ proposed equipment lists among the Domestic Preparedness interagency partners for comments, this coordination at the federal level did little to simplify the process for the cities. State and local officials and some national fire fighter organizations also raised concerns about the growing number of response elements being formed, including the new initiative to train and equip National Guard units. These officials did not believe specialized National Guard units would be of use because they could not be on site in the initial hours of an incident and because numerous support units within the military and other federal agencies already can provide backup assistance to local authorities as requested. Examples of existing support capabilities include the Army’s Technical Escort Unit, the Marine Corps’ Chemical Biological Incident Response Force, and the Public Health Services’ National Medical Response Teams. State and local officials were more supportive of the traditional National Guard role to provide requested disaster support through the state governor. We are currently reviewing the proposed role of the National Guard and reserves in WMD consequence management. 
As noted in our December 1997 report and in our April 1998 testimony, the many and increasing number of participants, programs, and activities in the counterterrorism area across the federal departments, agencies, and offices pose a difficult management and coordination challenge to avoid program duplication, fragmentation, and gaps. We believe that the National Security Council's National Coordinator for Security, Infrastructure Protection, and Counter-Terrorism, established in May 1998 by Presidential Decision Directive 62, should review and guide the growing federal training, equipment, and response programs and activities. Just as the broadening scope of efforts to combat terrorism poses a serious challenge for the executive branch, it also can be a coordination and oversight challenge for the Congress. The current committee structure is aligned with an agency and functional focus for authorization, appropriations, and oversight, and multiagency crosscutting issues, such as combating terrorism, proliferation, and others, fall within the jurisdiction of many authorizing committees and appropriations subcommittees. Mr. Chairman, that concludes our prepared statement. We will continue to finalize our report, receive agency comments, and develop recommendations on program focus, and will be issuing that report in the next few weeks. We would be happy to answer any questions at this time. 
| Pursuant to a congressional request, GAO discussed its work and observations on the Nunn-Lugar-Domenici Domestic Preparedness Program and related issues, focusing on: (1) program objectives and costs; (2) the training the Department of Defense (DOD) is providing to local emergency response personnel; (3) issues GAO identified on the way the program is structured and designed; (4) the equipment segment of DOD's program; and (5) interagency coordination of this and other related programs. GAO noted that: (1) the Domestic Preparedness Program is aimed at enhancing domestic preparedness to respond and manage the consequences of potential terrorist weapons of mass destruction (WMD) incidents; (2) the authorizing legislation designated DOD as lead agency, and participating agencies include the Federal Emergency Management Agency (FEMA), the Federal Bureau of Investigation, the Public Health Service, the Department of Energy, and the Environmental Protection Agency; (3) DOD received $36 million in fiscal year (FY) 1997 to implement its part of the program, and the Public Health Service received an additional $6.6 million; (4) DOD's FY 1998 and 1999 budgets estimate that $43 million and $50 million will be needed to continue the program; (5) Domestic Preparedness Program training gives first responders a greater awareness of how to deal with WMD terrorist incidents; (6) by December 31, 1998, DOD expects to have trained about one-third of the 120 cities it selected for the program; (7) all training is to be complete in 2001; (8) DOD decided to select 
cities based on core city population, and it did not take into account a city's level of preparedness or financial need; (9) in implementing the Domestic Preparedness Program, DOD could leverage state emergency management structures, mutual aid agreements among local jurisdictions, or other collaborative arrangements for emergency response; (10) the legislation authorizes DOD to lend rather than give or grant training equipment to each city; (11) some cities GAO visited viewed the acceptance of the equipment as tantamount to an unfunded federal mandate because DOD is providing no funds to sustain the equipment; (12) in developing the program, some member agency officials stated that DOD did not always take advantage of the experience of agencies that were more accustomed to dealing with state and local officials and more knowledgeable of domestic emergency response structures; (13) the many and increasing number of participants, programs, and activities in the counterterrorism area across the federal departments, agencies, and offices pose a difficult management and coordination challenge to avoid program duplication, fragmentation, and gaps; and (14) GAO believes that the National Security Council's National Coordinator for Security, Infrastructure Protection and Counter-Terrorism, established in May 1998 by Presidential Decision Directive 62, should review and guide the growing federal training, equipment, and response programs and activities. |
Congress established the Highway Trust Fund in 1956 to hold highway user taxes that fund various surface transportation programs. The primary revenue sources for the Highway Trust Fund are federal excise taxes on motor fuels (gasoline, diesel, and special fuels taxes) and truck-related taxes (truck and trailer sales, truck tire, and heavy-vehicle use taxes). Fuel taxes provide about 89 percent of the excise tax income to the Highway Trust Fund. The Highway Trust Fund is divided into two separate accounts—the Highway Account and the Mass Transit Account. The Highway Account receives the majority (approximately 89 percent in fiscal year 2013) of the tax receipts allocated to the Highway Trust Fund, including the majority of the fuel taxes. All truck-related taxes are also deposited into the Highway Account. The Highway Trust Fund primarily supports surface transportation programs administered by four DOT operating administrations—FHWA, FTA, FMCSA, and NHTSA. The Highway Account funds programs administered by FHWA, FMCSA, and NHTSA, and the Mass Transit Account funds FTA programs. For fiscal year 2013, MAP-21 authorized approximately $51 billion from the Highway Trust Fund to four DOT operating administrations, most of which was authorized to FHWA (see fig. 1). These agencies provide much of this funding directly to states, metropolitan planning organizations, and transit agencies through formula and discretionary grants, and recipients select projects to be funded, subject to federal eligibility requirements. FHWA is primarily funded from the Highway Trust Fund; about 90 percent of FHWA's authorized funds are for the federal-aid highway program. FHWA oversees this program and distributes much of this funding to states through annual apportionments established by statutory formulas. 
Apportioned funds are available for states to obligate for construction, reconstruction, and improvement of highways and bridges on eligible federal-aid highway routes, as well as for other purposes authorized in law. While FHWA oversees and distributes funds to states, the responsibility for selecting specific highway projects generally rests with state DOTs and local planning organizations, which have discretion in determining how to allocate available federal funds among various projects. FHWA, however, is accountable for ensuring that the federal-aid highway program is delivered effectively, efficiently, and in compliance with established federal law. Specifically, FHWA relies on its 52 division offices—located in each state, the District of Columbia, and Puerto Rico—to oversee projects funded through the federal-aid highway program and ensure these projects comply with federal requirements. The remainder of FHWA's Highway Trust Fund funding (about 10 percent) is authorized for other programs, including the Federal Lands Highway Program, which provides financial and engineering assistance for a network of public roads that serve the transportation needs of federal and Indian lands, and FHWA's research and education activities. FHWA allocates Highway Trust Fund monies to 28 agencies, such as the Department of the Interior. See appendix II for additional information about FHWA's federal-aid highway and other spending. FTA receives funding both from the Mass Transit Account of the Highway Trust Fund for its Formula and Bus Grants and from the General Fund for its discretionary grants, which include the Capital Investment Grants programs. FTA distributes this funding to grant recipients for several activities, including financial and technical assistance to local and state public agencies to purchase, build, maintain, and operate transportation systems, and to support planning and operations for public transit systems, including bus, subway, and light rail. 
FTA works in partnership with states and other grant recipients to administer federal transit programs and to provide financial assistance, policy direction, technical expertise, and some oversight. State and local governments are ultimately responsible for executing most federal transit programs by matching and distributing federal funds and by planning, selecting, and supervising infrastructure projects and safety programs in accordance with federal requirements. NHTSA receives funding from both the Highway Trust Fund and the General Fund. It administers and distributes Highway Trust Fund monies by formula to states through various federal highway-safety grants, such as the State and Community Highway Safety Grant Program, a formula grant. This funding supports programs that work to reduce accidents from speeding, encourage the proper use of seat belts and child seats, reduce accidents from driving while intoxicated, prevent and reduce accidents between motor vehicles and motorcycles, and improve law enforcement services in motor-vehicle accident prevention and traffic supervision, among other things. NHTSA also coordinates through federal-state partnerships, regulates and issues safety standards for passenger vehicles, and addresses compliance with those standards by performing tests, inspections, and investigations. FMCSA receives funding for its programs from the Highway Trust Fund. FMCSA is charged with establishing and enforcing standards for motor carrier vehicles and operations, hazardous materials, and the movement of household goods, among other things. FMCSA also conducts compliance reviews of motor carriers' operations at their places of business as well as roadside inspections of drivers and vehicles, and can assess a variety of penalties, including fines and orders for noncompliant motor carriers to cease interstate operations. 
The largest of the federal motor carrier safety grant programs funded from the Highway Trust Fund—the Motor Carrier Safety Assistance Program grants—provides funding to states to reduce crashes, fatalities, and injuries related to commercial motor vehicle transportation. MAP-21 also authorized Highway Trust Fund monies to be used for agencies' administrative expenses. Currently, FHWA, FMCSA, and NHTSA all receive funding from the Highway Trust Fund for administrative expenses. FTA does not currently receive such funding but requested it for fiscal year 2015. FHWA collects and reports information on activities funded with obligations from the Highway Trust Fund. FHWA tracks these data for individual project segments or contracts but does not collect and report aggregate spending data at the project level for the majority of projects on a routine basis. FTA also collects data on the activities that are funded with Highway Trust Fund obligations each year, while NHTSA and FMCSA collect and report data on obligations from the Highway Trust Fund by grant program. FHWA, NHTSA, and FMCSA also receive funding from the Highway Trust Fund for administrative expenses; information on administrative obligations is available in annual budget requests. In fiscal year 2013, FHWA obligated about $41 billion from the Highway Trust Fund, most of which (about $39 billion) was apportioned to states for activities to improve the nation's roadway and bridge infrastructure through the federal-aid highway program. Our analysis of fiscal year 2013 federal-aid highway program obligations shows that states obligated most of this funding for road and bridge improvements (47 percent for roads and 17 percent for bridges) (see fig. 2). States also obligated about 20 percent of Highway Trust Fund monies for project development activities, including planning, engineering, and acquiring rights-of-way. 
Additionally, 9 percent was obligated for safety, enhancements, and other improvements, including about 1 percent for sidewalks and bicycle trails. We further analyzed road and bridge improvements and found that about 90 percent of obligations went toward reconstruction, resurfacing, and rehabilitation activities, while about 10 percent went toward new construction. Additional information about road and bridge improvements and funding by program is available in appendix II. While FHWA collects information in FMIS on the types of activities funded with Highway Trust Fund monies, it does not currently collect and report aggregate spending data at the project level for the majority of projects on a routine basis. FHWA collects these data for individual project segments or contracts, but not for an entire project. For example, as shown in figure 3, a new highway project generally has four stages: (1) planning, (2) preliminary design and environmental review, (3) final design and right-of-way acquisition, and (4) construction. Each stage can include multiple project segments or contracts over many years, with distinct obligations at any given time, and FHWA does not currently link all project segments or contracts associated with an entire project in FMIS. Although FHWA is able to collect and report federal obligations by individual contract, it is not able to aggregate this information to report total federal obligations for an entire project. FTA collects and reports information on activities funded with obligations from the Highway Trust Fund. FTA, through its grant management system—the Transportation Electronic Award and Management system—reports information on federal obligations in its annual statistical summaries and makes this information publicly available on its website. 
The statistical summaries provide information about federal funds obligated each fiscal year for each of FTA's grant programs by categories of activities, such as obligations for "bus purchases" or "fixed guideway modernization" (such as rail purchases). FTA obligated approximately $9 billion in fiscal year 2013 from the Mass Transit Account of the Highway Trust Fund through its Formula and Bus Grants programs. These grant programs provided funding for a range of activities, such as modernizing existing rail systems, increasing access to transportation in rural areas, and restoring, replacing, and acquiring buses and other equipment. As shown in table 1, FTA obligated about $3 billion for activities to modernize or improve existing fixed guideway systems, which include, among other things, purchases and rehabilitation of rail equipment and station enhancements. FTA also obligated about $1.5 billion and $2 billion, respectively, for bus purchases and other bus activities in fiscal year 2013. FMCSA collects and reports obligations from the Highway Trust Fund at the grant program level in its grant management data system, Grant Solutions, and in Delphi, DOT's accounting system. As shown in table 2, FMCSA obligated about $298 million in fiscal year 2013, primarily through grants to states to improve commercial motor vehicle safety, border enforcement, and vehicle license and information systems programs. 
The largest of FMCSA's grant programs, the Motor Carrier Safety Assistance Program (MCSAP), provided $213 million (about 71 percent of FMCSA's total federal obligations) to states to help develop or implement programs to reduce commercial motor vehicle-involved accidents, fatalities, and injuries through safety activities such as inspections. FMCSA also obligated funds for its border enforcement ($32 million) and vehicle licensing and registration ($29 million) grant programs, as well as funding for investments in data programs such as the Performance and Registration Information System Management grant program ($5 million), among others. FMCSA collects other information about its grant programs as part of its oversight and monitoring responsibilities in other information systems. For example, FMCSA collects data on activities that were funded with MCSAP grants in its Analysis and Information Online Data Dashboard. Within DOT, FHWA, FMCSA, and NHTSA receive Highway Trust Fund monies for administrative expenses such as personnel salaries and benefits and rent. There is no standard definition within DOT of what is considered an administrative expense. According to DOT officials, MAP-21 and appropriations language establish parameters for the types of activities that can be funded with Highway Trust Fund monies. For example, FMCSA's administrative funds can be used for, among other things, personnel costs, administrative infrastructure, rent, information technology, research programs, and such other expenses as may from time to time become necessary to implement statutory mandates of the administration not funded from other sources. As shown in table 4, in fiscal year 2013, DOT agencies obligated a total of $752 million for administrative expenses funded from the Highway Trust Fund. This accounted for about 2 percent of Highway Trust Fund obligations by DOT's operating administrations. 
MAP-21 also authorized some of FHWA's Highway Trust Fund administrative expense funding to be used for specific programs, including the Highway Use Tax Evasion Projects program and the On-the-Job Training Support Services program. In fiscal year 2013, FHWA obligated $22 million for these programs. DOT agencies report information on administrative expenses paid from the Highway Trust Fund in their annual budget requests, which are publicly available. DOT operating administrations have limited flexibility in how Highway Trust Fund monies can be used for administrative expenses and in reallocating full-time equivalents (FTE) funded from the Highway Trust Fund. For example, NHTSA allocates FTEs by program, and no individual NHTSA employee is paid out of both the Highway Trust Fund and the General Fund. NHTSA does have some flexibility to fund FTEs from either the Highway Trust Fund or the General Fund, with allocations made at the fund level based on the type of work assigned to the employee. Motor fuel taxes that support the Highway Trust Fund are eroding, resulting in fewer resources to fund surface transportation projects and requiring, in recent years, infusions of funding from general revenues. We have reported that continuing to fund the Highway Trust Fund through general revenues may not be sustainable given competing demands and the federal government's fiscal challenges, and that Congress and the Administration need to agree on a long-term plan for funding surface transportation. Given this situation, ensuring that Highway Trust Fund dollars are spent wisely and that their uses are transparent to Congress and the public is important. About $39 billion of FHWA's fiscal year 2013 Highway Trust Fund total obligations of $41 billion were distributed to states through the federal-aid highway program, and FHWA is accountable for the efficient and effective use of these funds. 
In recent years, FHWA has taken some positive steps to collect and report aggregate spending data for its "major" projects, but it does not currently collect and report aggregate spending data for other projects, which represented nearly 88 percent of all fiscal year 2013 federal-aid highway obligations. FHWA could collect and report aggregate spending data for all projects in FMIS, since the database already has two existing data fields that could be used to collect this information. According to FHWA officials, collecting these data in FMIS could result in some increased costs to states; however, FHWA has not estimated what those costs to states would be. FHWA would have to collect further information on the costs and resources required to make these data fields mandatory and would need to develop data collection procedures to ensure that state users enter and report consistent data. FHWA is currently in the process of modernizing its FMIS database system, which could provide FHWA with an opportunity to explore options for further refining FMIS to collect consistent aggregate spending data for all projects, or other options for collecting this information. Improving FMIS to allow states to provide project-level data could aid FHWA in its risk-based oversight of federal-aid highway programs by allowing FHWA to more easily draw consistent data for its compliance assessment reviews. In addition, collecting project-level data could assist FHWA in tracking and reporting information to Congress and the public about how the majority of federal funds from the Highway Trust Fund are being used. 
To improve transparency and provide Congress and the public greater visibility into the types of highway activities funded with Highway Trust Fund monies, we recommend that the Secretary of Transportation direct the FHWA Administrator to explore the costs, feasibility, and options for collecting and publicly reporting consistent aggregate project-level spending data. We provided a draft of this report to DOT for its review and comment. DOT agreed with our recommendation and provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Transportation. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or Flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The objective of this report was to examine what is known about the types of projects, activities, and federal administrative functions and expenses supported by the Department of Transportation (DOT) using Highway Trust Fund monies in fiscal year 2013. To identify the types of projects, activities, and administrative expenses that have been undertaken using Highway Trust Fund monies in fiscal year 2013, we obtained and analyzed data from the Federal Highway Administration (FHWA), the Federal Transit Administration (FTA), the National Highway Traffic Safety Administration (NHTSA), and the Federal Motor Carrier Safety Administration (FMCSA) on fiscal year 2013 obligations for all programs and administrative expenses funded with Highway Trust Fund monies. According to DOT, these are the DOT agencies that can directly obligate funds from the Highway Trust Fund. 
For FHWA data, we obtained and analyzed data from FHWA's Fiscal Management Information System (FMIS), which is FHWA's major financial information system for tracking highway projects financed with federal-aid highway program funding. FHWA uses the information entered in FMIS for planning and executing program activities, evaluating program performance, and depicting financial trends and requirements relating to current and future funding. Specifically, we obtained FHWA data for projects with obligated funds in fiscal year 2013 (October 1, 2012, through September 30, 2013). For the purposes of our analysis, we included obligations for all 50 states, the District of Columbia, and Puerto Rico, and excluded the U.S. territories of American Samoa, Guam, the Northern Mariana Islands, and the Virgin Islands from this total. The data set that FHWA provided us included the following data fields: (1) State, (2) Project Number, (3) Fund Source, (4) Recode, (5) Improvement Type, (6) Fiscal Year 2013 Federal Funds, and (7) Major Project. We analyzed the data to determine the total obligated federal funds for 59 improvement types. We categorized these improvement types into 13 broader GAO categories and determined total federal obligations incurred for each of these categories. (See table 5.) About $3.9 million of the entries in the FMIS database did not have an improvement type classification, and for the purposes of our analysis, we classified these entries as "other." FHWA also provided us with fiscal year 2013 Highway Trust Fund obligations data for programs and administrative expenses not captured in FMIS. We also reviewed documents from and conducted interviews with FHWA officials to gather information about: (1) the capabilities of FHWA's FMIS database to track project-level data, (2) processes and protocols for tracking and entering project-level data in FMIS, and (3) the extent to which FHWA uses FMIS data for project management and risk oversight purposes. 
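The categorization step described above, rolling detailed improvement types up into broader categories and totaling federal obligations per category, amounts to a simple mapping-and-aggregation operation. A minimal Python sketch follows; the improvement-type codes, category names, and dollar amounts are entirely hypothetical, not actual FMIS data.

```python
# Illustrative sketch of mapping detailed improvement types to broader
# categories and summing obligations. All codes, category names, and
# dollar amounts are hypothetical, not actual FMIS data.
from collections import defaultdict

# (improvement_type, obligated_dollars) records from a hypothetical extract;
# an entry with no improvement type classification falls into "Other".
records = [
    ("Resurfacing", 120.0),
    ("Reconstruction - added capacity", 80.0),
    ("Bridge replacement", 60.0),
    ("Resurfacing", 40.0),
    (None, 3.9),
]

# Hypothetical crosswalk from detailed improvement types to broader categories
category_map = {
    "Resurfacing": "Road improvements",
    "Reconstruction - added capacity": "Road improvements",
    "Bridge replacement": "Bridge improvements",
}

totals = defaultdict(float)
for improvement_type, dollars in records:
    totals[category_map.get(improvement_type, "Other")] += dollars

print(dict(totals))
# → {'Road improvements': 240.0, 'Bridge improvements': 60.0, 'Other': 3.9}
```

In practice the crosswalk would cover all 59 improvement types and the records would come from the FMIS extract, but the aggregation logic is the same.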
To assess the reliability of data collected in FMIS, we reviewed available documentation and interviewed FHWA officials on the procedures used by FHWA and state departments of transportation (state DOTs) to enter and verify financial information entered into FMIS. We also conducted electronic testing for duplicate entries and missing values in the data we extracted from FMIS. We found the FMIS data elements we used in our report to be sufficiently reliable for the purposes of reporting federal obligations for fiscal year 2013. For FTA, we requested, obtained, and analyzed FTA data on Highway Trust Fund obligations for all programs in fiscal year 2013. FTA produced these data from its Transportation Electronic Award and Management (TEAM) system and provided us with total federal obligations by program and by 13 categories of activities, which GAO reclassified into 9 categories of activities. We did not obtain administrative obligations data from FTA because FTA does not receive funding from the Highway Trust Fund for administrative expenses. Similarly, we requested, obtained, and analyzed NHTSA and FMCSA data on Highway Trust Fund obligations for all programs and administrative expenses during fiscal year 2013. Both agencies produced these data through Delphi, DOT's accounting system. We also reviewed publicly available information on FTA, NHTSA, and FMCSA fiscal year 2013 program and administrative obligations from the Highway Trust Fund. We interviewed officials from FTA, NHTSA, and FMCSA and obtained written information about steps taken to ensure the reliability of their data from TEAM and from the Delphi system. We determined that the data were sufficiently reliable for the purposes of this report. We reviewed relevant statutes, regulations, legislation, and other literature, including prior GAO reports, on Highway Trust Fund authorizations and the types of administrative expenses that can be funded with Highway Trust Fund dollars. 
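The electronic testing for duplicate entries and missing values described above can be sketched as two straightforward checks over an extract. The field names and rows below are hypothetical examples, not actual FMIS records.

```python
# Illustrative reliability checks: flag exact duplicate records and records
# with missing values in a hypothetical extract (field names are made up).
rows = [
    {"state": "VA", "project_number": "P-001", "funds": 10.0},
    {"state": "VA", "project_number": "P-001", "funds": 10.0},  # exact duplicate
    {"state": "MD", "project_number": "P-002", "funds": None},  # missing value
]

# Duplicate check: a record seen more than once, field for field
seen, duplicates = set(), []
for row in rows:
    key = tuple(sorted(row.items()))  # hashable, order-independent record key
    if key in seen:
        duplicates.append(row)
    seen.add(key)

# Missing-value check: any record with an empty field
missing = [row for row in rows if any(value is None for value in row.values())]

print(len(duplicates), len(missing))  # → 1 1
```

Any flagged records would then be reviewed with the data provider before the extract was used for analysis.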
We interviewed officials from DOT's Office of the Secretary of Transportation (OST) and the four DOT operating administrations to obtain additional information on federal obligations from the Highway Trust Fund; the administrations' processes for tracking and monitoring these obligations; and the flexibility DOT and its operating administrations have to reallocate Highway Trust Fund dollars among offices and other functions. There is no standard definition of administrative expenses; however, for the purposes of our review, administrative expenses were defined as salaries, benefits, travel, and other service contracts. We conducted this performance audit from March 2014 to October 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In fiscal year 2013, FHWA spent $41 billion from the Highway Trust Fund. Of this amount, $39 billion was apportioned to states through the federal-aid highway program. FHWA also spent about $2 billion of its $41 billion from the Highway Trust Fund for other purposes, including transfers to other DOT and non-DOT federal agencies ($610 million), the Federal Lands Transportation Program ($558 million), and research activities ($438 million), among others. In fiscal year 2013, FHWA allocated Highway Trust Fund monies to 28 agencies (8 DOT and 20 non-DOT agencies), including DOT's Federal Railroad Administration and the Department of the Interior, among others. Our analysis of FHWA FMIS data on fiscal year 2013 obligations showed that about 47 percent of total federal obligations went to road improvements and 17 percent went to bridge improvements. 
The majority of both road and bridge obligations went toward reconstruction, resurfacing, and rehabilitation activities rather than new construction. Specifically, for roads, about 43 percent was dedicated to resurfacing and rehabilitation, with most of the remainder going toward reconstruction with increased or no added capacity. For bridges, about 59 percent was dedicated to bridge rehabilitation and replacement with no added capacity, and most of the remainder went toward rehabilitation and replacement with increased capacity and other bridge improvements. (See figs. 4 and 5.) In addition, about 29 percent of FHWA federal-aid highway program obligations from the Highway Trust Fund went toward project development, safety, and other improvements. Project development activities include funding for activities such as preliminary engineering, construction engineering, research, and other planning activities. Our analysis of these data showed that most of this funding went toward construction engineering (32 percent) and preliminary engineering (29 percent). (See fig. 6.) Safety, enhancements, and other improvements include safety, safety education for pedestrians and bicyclists, and highway crossing activities, among others. Our analysis of these data showed that most of this funding went to safety activities (67 percent). An additional 15 percent went toward facilities for pedestrians and bicycles. (See fig. 7.) We also analyzed FHWA FMIS data on projects funded under each of FHWA's five core formula programs. These core programs account for $31 billion (about 76 percent) of the $41 billion that FHWA obligated from the Highway Trust Fund in fiscal year 2013. As discussed below, the types of activities funded under each of these programs varied widely. (See figs. 8-12.) 
The National Highway Performance Program (about $17 billion) provides funding for improvements on the National Highway System, such as construction, reconstruction, resurfacing, and rehabilitation of National Highway System segments. The majority of this funding was obligated for resurfacing and rehabilitation of roads (21 percent), project development activities (19 percent), and reconstruction of roads to increase capacity (18 percent). The Surface Transportation Program (STP) (approximately $11 billion) funds the federal share of projects that states and localities may carry out on federal-aid highways, including bridge projects, transit capital projects, and bus facilities. The majority of STP funding was obligated for resurfacing and rehabilitation of roads (26 percent), project development activities (20 percent), and reconstruction of roads with no added capacity (11 percent). The Highway Safety Improvement Program (approximately $2 billion) provides funding for activities that reduce the number of crashes, traffic fatalities, and serious injuries on public roads. Most of this funding was obligated for safety improvements (62 percent) and project development activities (20 percent). The Congestion Mitigation and Air Quality Improvement Program (about $1 billion) provides funding to state and local governments for transportation projects and programs to help meet the requirements of the Clean Air Act. About 21 percent of this funding was obligated for project development activities, and 24 percent was obligated for other activities, such as traffic management on high-occupancy vehicle lanes. The Transportation Alternatives Program (approximately $111 million) provides funding for alternative transportation projects related to surface transportation, such as pedestrian and bicycle trails, community improvement activities, and the construction, planning, and design of infrastructure-related projects and systems that provide safe routes for non-drivers. 
About 50 percent of this funding was obligated for sidewalks and bicycle trail activities, with most of the remainder obligated for project development (17 percent) and other (18 percent) activities. Our analysis of fiscal year 2013 federal-aid highway program obligations for "major" projects showed that over 40 percent of these funds were used for road resurfacing and rehabilitation (24 percent) and project development activities (19 percent). (See fig. 13.) About 6 percent of the funds were used for new construction of roads and bridges, while 7 percent was used for safety improvements. In addition to the contact named above, Steve Cohen, Assistant Director, Melissa Bodeau, Melinda Cordero, Tara Jayant, Mitchell Karpman, Leslie Locke, Maria Mercado, Sara Ann Moessbauer, Ruben Montes de Oca, and Crystal Wesco made key contributions to this report. | In recent years, dedicated revenues to the Highway Trust Fund have been eroding, resulting in fewer resources to fund surface transportation projects and requiring, between 2008 and 2014, transfers of over $50 billion in general revenues. Four operating administrations within DOT—FHWA, FTA, NHTSA, and FMCSA—receive funding from the Highway Trust Fund for programs administered by these agencies, and FHWA receives the largest share (81 percent of the agency's authorizations in fiscal year 2013). GAO was asked to review how Highway Trust Fund monies are being used to help ensure that sound choices and investment decisions about future funding are made. This report examines what is known about the types of projects, activities, and federal administrative functions and expenses supported by DOT using Highway Trust Fund monies in fiscal year 2013. To address this request, GAO obtained and analyzed fiscal year 2013 federal obligation data from DOT's operating administrations, reviewed relevant documentation, and interviewed FHWA, FTA, FMCSA, NHTSA, and DOT officials. 
In fiscal year 2013, operating administrations within the Department of Transportation (DOT) collected and reported some information on the types of activities and administrative expenses funded from the Highway Trust Fund, but did so with varying levels of detail. The Federal Highway Administration (FHWA) obligated about $41 billion in fiscal year 2013, most of which ($39 billion) was apportioned to states through the federal-aid highway program. FHWA tracks federal-aid highway program obligations in its Fiscal Management Information System (FMIS) for individual project segments or contracts. This process allows FHWA to collect and report information on the types of activities (such as obligations for the construction of new roads or bridges) funded with Highway Trust Fund monies. However, FHWA does not collect and report aggregate project-level data for the majority of projects on a routine basis. Aggregate project-level data would allow FHWA to track and report the total overall obligations of an entire project. While FHWA tracks and reports aggregate obligations for its "major projects" (projects with a total cost of $500 million or more), it does not collect and report aggregate obligations for other projects, which represented nearly 88 percent of all fiscal year 2013 spending. FHWA could collect and report aggregate obligations for all projects in FMIS, and FMIS has two existing data fields that could be used to collect this information. But according to FHWA officials, doing so would result in increased costs to FHWA and states. FHWA officials attributed the increased costs to, among other things, the programming costs of changing FMIS to track these data; however, FHWA has not completed a cost analysis to estimate what the associated costs would be. FHWA is currently in the process of modernizing its FMIS database system. 
Exploring the costs, feasibility, and options for collecting and reporting consistent aggregate project-level obligations could aid FHWA in its oversight efforts, including its ability to more easily draw consistent data for its compliance reviews and to report information to Congress and the public about how the majority of federal funds from the Highway Trust Fund are being used. The Federal Transit Administration (FTA), the National Highway Traffic Safety Administration (NHTSA), and the Federal Motor Carriers Safety Administration (FMCSA) also collect some information on activities funded with Highway Trust Fund monies. For example, FTA collects data on activities, such as obligations for bus and rail purchases, funded with Highway Trust Fund monies each year, and NHTSA and FMCSA collect and report data by grant program. In addition, within DOT, the FHWA, FMCSA, and NHTSA used Highway Trust Fund monies for a range of administrative expenses, such as personnel salaries and benefits and rent. FTA does not receive Highway Trust Fund monies for administrative expenses. GAO recommends that the Secretary of Transportation direct the FHWA Administrator to explore the costs, feasibility, and options for collecting and publicly reporting consistent aggregate project-level spending data. DOT agreed with our recommendation. DOT also provided technical comments, which we incorporated, as appropriate. |
More than 10 years ago, Congress directed FAA to conceptualize and plan NextGen; FAA is now implementing key NextGen systems and capabilities. NextGen was envisioned as a major redesign of the air transportation system to increase efficiency, enhance safety, and reduce flight delays; it would entail precision satellite navigation and surveillance; digital, networked communications; an integrated weather system; and more. Figure 1 provides examples of changes and benefits that are expected to come from NextGen. The transition to NextGen—which encompasses multiple programs, procedures, and systems at different levels of maturity—is a complex, incremental, multi-year process. Since 2006, we have monitored FAA’s development of NextGen and identified a number of key challenges facing the agency’s implementation efforts. The 2012 Act included several provisions that address some of the issues that we have identified in our work, including incentivizing aircraft operators to equip with NextGen technologies, developing performance measures, and involving stakeholders in NextGen development. Our recent work on FAA’s progress in implementing NextGen has highlighted ongoing challenges in three areas: improving leadership, demonstrating near-term benefits, and balancing the needs of the current system while implementing NextGen systems. Our work has found that complex organizational transformations, such as NextGen, involving technology, systems, and retraining key personnel require substantial leadership commitment over a sustained period, and that leaders must be empowered to make critical decisions and held accountable for results. Transitions, inconsistent leadership, and unclear roles and responsibilities can weaken the effectiveness of the internal and external collaboration required for successful implementation.
Both the magnitude of the multi-year transition and the numerous efforts under way throughout the different offices and divisions in FAA to effectuate that transition will require FAA’s leaders to manage all aspects of NextGen in a strategic, timely, and coordinated fashion. FAA has struggled to have the leadership in place to manage and oversee NextGen implementation, but more recently, has begun to fill key positions. In June 2013, FAA appointed a new Deputy Administrator and designated a Chief NextGen Officer, in response to Section 204 of the 2012 Act. In addition, in September 2013, FAA appointed a new Assistant Administrator for NextGen—a position that had previously been vacant. Designating one leader—such as giving the Deputy Administrator responsibility over NextGen—can be beneficial because it centralizes accountability and can speed decision-making. With these positions now filled, FAA should be in a better position to resolve its NextGen leadership challenges. However, as I have stated in other work, a number of offices oversee certain aspects of NextGen, not all of which report to the Assistant Administrator, and implementation will require successful collaboration between these offices. As these positions have only recently been filled, it is not yet clear how effective the changes resulting from the 2012 Act will be in achieving that collaboration. Another key change in NextGen management, envisioned by Section 208 of the 2012 Act, was the redesignation of the Director of the Joint Planning and Development Office (JPDO) as an Associate Administrator reporting directly to the FAA Administrator, with defined responsibility for coordination and planning with FAA’s partner agencies. FAA has not fully implemented this change. Moreover, in the Consolidated Appropriations Act, 2014, Congress eliminated direct funding of JPDO and subsumed JPDO in FAA’s operations budget.
At this point, it remains unclear whether a JPDO Director position will continue and, if not, how the roles and responsibilities of that office, particularly with respect to long-term planning and coordination of research and development efforts across partner agencies, will be redistributed within FAA. We will continue to monitor these issues in two studies requested by this committee—one examining the organizational and leadership structure of the NextGen effort, and one looking more in-depth at actions FAA has taken to streamline its organization. We have begun both of these examinations. To convince operators to make investments in NextGen equipment, FAA must continue to deliver systems, procedures, and capabilities that demonstrate near-term benefits and a return on an operator’s investments. In particular, a large percentage of the current U.S. air carrier fleet is equipped to fly more precise performance-based navigation (PBN) procedures, such as following precise routes that use the Global Positioning System or glide descent paths, which can save airlines and other aircraft operators money through reduced fuel burn and flight time. However, aircraft operators have expressed concerns that FAA has been slow to produce new procedures for various reasons, and has not produced the most useful or beneficial PBN routes and procedures. The 2012 Act included a number of provisions aimed at accelerating the creation of PBN procedures. For example, Section 213 of the 2012 Act directed FAA to develop plans to identify beneficial PBN procedures and to prioritize their implementation at key airports. We reported in April of 2013 that FAA had made progress in focusing its PBN efforts at seven priority metroplexes with airport operations that have a large effect on the overall efficiency of the NAS. 
More recently, FAA reports that it is considering recommendations from the NextGen Advisory Committee (NAC) regarding revalidation of the criteria used to prioritize these metroplexes, and that recent efforts have been diverted to metroplexes where the deployment of the new En Route Automation Management (ERAM) system is complete, in order not to interfere with ERAM deployment at those locations where it is ongoing. Our work also found that FAA does not have a system for tracking the use of existing PBN procedures. As a result, FAA is unable to assure that investment in these routes is worthwhile or that they justify the cost to develop and maintain them. Further, in the absence of data on the use of existing PBN routes, airlines and other stakeholders remain unconvinced that the investments needed for the full implementation of NextGen will be justified. Such data could help the agency demonstrate the value of PBN technologies and any resulting benefits, as well as allow the agency to identify routes that need to be revised to increase their use. We made recommendations to FAA to develop a system to track the use of PBN procedures and a process to proactively identify new PBN procedures based on NextGen goals and targets. We will continue to monitor FAA’s progress in implementing these recommendations. The 2012 Act also included two other key provisions to accelerate the creation of PBN procedures. The first was a categorical exclusion from environmental review for PBN procedures that if implemented could demonstrate measurable reductions in fuel consumption, carbon dioxide emissions, and noise, on a per-flight basis, as compared to aircraft operations that follow existing procedures. However, our April 2013 report found that, according to FAA, potential noise impacts are measured cumulatively for all flights and that FAA has not yet identified an approach for per-flight assessments. 
FAA officials stated that no currently available methodology resolves the technical problems involved in making such a determination, so the agency has not applied this new categorical exclusion. Second, the 2012 Act called for the agency to establish a program for qualified third parties to develop, test, and maintain flight procedures. FAA has made some progress in this area by awarding a $2.8-million contract to GE’s Naverus and a partner to develop two PBN procedures each at five mid-sized airports. The contractors are to design, evaluate, and maintain these procedures and be responsible for providing environmental data and analysis to FAA to support categorical exclusions and for drafting any required National Environmental Policy Act reviews, for review and approval by FAA. As of January 2014, PBN procedures had been implemented at two of the five selected airports. NextGen represents a transition from existing ATC systems and facilities to new systems, potentially necessitating changes to or consolidation of existing facilities. We have reported over the years that various investment and policy decisions, including what existing ATC systems and facilities will remain in the NAS during the transition and for how long, have yet to be made. For the systems and facilities that remain, FAA will have to monitor and maintain their performance and condition while the agency implements the NextGen transition. Decisions about the number of existing systems and facilities that will remain in operation during the transition have implications for FAA’s capital and operations budgets going forward. If aging systems and associated facilities are not retired, FAA will miss potential opportunities to reduce its overall maintenance costs at a time when resources needed to maintain both systems and facilities may become scarcer. The 2012 Act contained a number of provisions aimed at accelerating the implementation of NextGen systems. 
However, we found in August 2013 that FAA’s budget planning does not fully account for the potential impact of NextGen systems that will be deployed and the need for continued operations and maintenance of existing systems and facilities. In the 2012 Act, Congress also expressed concern regarding the condition of FAA facilities and mandated that we study their condition. In our September 2013 report, we noted that FAA estimates its staffed facilities, like towers and Terminal Radar Approach Control (TRACON) facilities, have about $260 million in deferred maintenance; unstaffed facilities, such as shelters and communication towers that house and support NAS equipment, had an estimated $446 million in deferred maintenance in 2012. These and other cost estimates for maintaining existing systems and facilities, along with implementing NextGen, exceed anticipated funding levels. Moreover, we concluded that FAA’s imprecise facility-condition data do not facilitate agency-wide priority assessments, which, in turn, could hinder the agency’s ability to target its limited resources on those projects in greatest need of repair and that are most critical to the NAS. In addition, section 804 of the 2012 Act directed FAA to complete a study on the consolidation and realignment of FAA services and facilities to support the transition to NextGen. However, FAA has yet to identify which facilities would be consolidated or realigned, and according to FAA officials, the study will continue through 2014. In our August 2013 report we recommended improvements to FAA’s budget-planning and infrastructure-condition data, improvements that FAA is currently considering.
Improved budget planning and accurate and reliable data on infrastructure condition could help Congress better understand the funding requirements of existing systems and facilities and facilitate FAA’s efforts to support the agency’s mission of continuing to safely operate the NAS along with the longer-term goal of transitioning to NextGen. We will continue to monitor FAA’s progress in implementing these recommendations. The U.S. air transportation system remains one of the safest in the world. As part of FAA’s efforts to maintain and improve the safety of the system, FAA issues certificates and approvals for new air operators, new aircraft, and aircraft parts and equipment, and grants approvals for changes to air operations and aircraft based on FAA’s interpretation of federal standards (see fig. 2). These certificates and approvals indicate that such things as new aircraft, the design and production of aircraft parts and equipment, and new air operators are safe for use in the NAS. However, our previous work has highlighted FAA’s inconsistent regulatory interpretation of certification standards. In 2010, we found that variation in FAA’s interpretation of standards for certification and approval decisions was a long-standing issue and made recommendations to improve those processes. Subsequently, the 2012 Act required FAA to work with industry to assess the certification process, including reviewing our previous work and developing recommendations to address the concerns that we and others have raised. As required by Section 312 of the 2012 Act, FAA, in consultation with representatives of the aviation industry, made recommendations to the director of FAA’s Aircraft Certification Service regarding streamlining and reengineering the certification process. These recommendations, which we found to be relevant, clear, and actionable, called for FAA to: 1. improve the effectiveness of its delegation programs, 2. 
update certification procedures to reflect a systems approach to safety, 3. review operational safety and rulemaking processes, and 4. implement efficiency reforms, among others. In July 2013, FAA released its plan to implement these recommendations. The plan included 14 initiatives and programs that FAA either had under way or intended to start to improve efficiency and reduce costs related to certifications. We found these initiatives were generally relevant to the recommendations and were clear and measurable. However, we found that FAA’s plans do not contain some of the elements essential to a performance measurement process. For example, FAA has developed milestones for each initiative and deployed a tracking system to monitor the implementation of all certification-related initiatives, but it has not yet developed performance measures to track the success of most of the initiatives and programs. According to an FAA official, the agency has started discussions with industry stakeholders to identify key goals related to performance measurement. Because industry’s goals and FAA’s goals may differ with respect to the certification process, developing meaningful performance measures is a complex task that the agency plans to continue in 2014. The Committee recently asked us to examine in more detail FAA’s progress and any challenges experienced in implementing the recommendations and making improvements to its certification processes, and we will be tracking FAA’s efforts going forward. Also resulting from issues found in our 2010 report on certification, section 313 of the Act directed FAA to establish an advisory panel to address inconsistencies in the interpretation of regulations by the certification offices.
Consistent with issues raised in our 2010 report, the advisory committee identified three root causes of inconsistent interpretation of regulations: (1) unclear regulatory requirements; (2) inadequate and nonstandard FAA and industry training in developing regulations, applying standards, and resolving disputes; and (3) a culture that includes a general reluctance by both industry and FAA to work issues of inconsistent regulatory application through to a final resolution and a “fear of retribution.” To address these root causes, the committee made six recommendations, including developing a master source of guidance and developing instructions for FAA staff with policy development responsibility. We found that the advisory committee took a reasonable approach in identifying the root causes and that its recommendations were relevant, actionable, and clear. The committee also considered the feasibility of the recommendations by identifying modifications to existing efforts and programs and prioritizing the recommendations. FAA reported in January 2014 that it was still determining the feasibility of implementing these recommendations. The agency told us that it expected to publish an action plan to address the recommendations and metrics to measure implementation by late June 2014, more than six months after FAA’s initial target. We note that while measuring implementation may be useful, FAA is not intending to measure outcomes, a measurement that could help in understanding whether an action is having the intended effect. UAS are aircraft and associated equipment that do not carry a pilot aboard, but instead operate on pre-programmed routes or are manually controlled by pilot-operated ground stations.
Although current non-military, domestic uses of UAS are limited to activities such as law enforcement, forensic photography, border security, and scientific data collection, UAS have a wide range of other potential commercial uses—including vehicular traffic monitoring, crop dusting, and pipeline inspections—and the market for UAS use is expected to grow. Concerned with the pace of integrating UAS into the NAS, Congress established specific requirements and set deadlines for FAA in the 2012 Act. FAA has several efforts under way to satisfy the 2012 Act’s requirements, most of which must be achieved by December 2015. In January 2013, we reported that of the seven deadlines that had passed, FAA had completed two items. However, since that time, FAA has satisfied a number of additional milestones (see app. III for an update of all the 2012 Act’s requirements with respect to UAS). Of particular note: JPDO and FAA released a UAS Comprehensive Plan and a UAS Roadmap, respectively, in November 2013 to outline the nation’s UAS goals and objectives and the tasks necessary to achieve UAS integration. In late December 2013, FAA selected the six locations for its UAS test site program. FAA established permanent Arctic areas where small UAS can operate for research and commercial purposes, and the first flight took place in the fall of 2013. While progress has been made implementing some of the key milestones established in the 2012 Act, integrating UAS into the NAS continues to challenge FAA, leading to uncertainty about when UAS integration will be achieved. For example, while FAA announced the six locations for its UAS test site program, FAA has not yet defined what operational, safety, and performance data it needs from the test sites and how those data will be collected and analyzed.
We previously reported that use of these data would be important in developing safety, reliability, and performance standards, which are needed to guide and validate the supporting research and development efforts. FAA and industry stakeholders have stated that data and other information generated by the test sites will be important in helping FAA answer key research questions related to UAS operations and developing regulations and operational procedures for future commercial and civil use of UAS. Finally, to increase collaboration, provide stable organizational leadership, and focus UAS integration efforts, FAA created the UAS Integration Office in 2013. As of January 2014, the office did not have an operations budget but had 33 full-time employees; FAA is still finalizing agreements and other arrangements related to the reorganization, and it remains unclear what resources the office will have available to fulfill its role. Moving forward, FAA has a number of important milestones it must meet to ensure UAS integration into the NAS. A key next step, according to FAA officials and industry stakeholders, will be to adopt a final rule for small UAS operations. Although FAA has had efforts under way since 2008 supporting a rulemaking on small UAS, it is unlikely that FAA will meet the August 2014 final rule deadline required by the 2012 Act. For example, FAA has not yet issued a Notice of Proposed Rulemaking for small UAS, and recently estimated that one will not be released until November 2014. Further, FAA must develop standards—and determine what data are necessary to inform that process—to facilitate safe UAS integration into the NAS. More broadly, to achieve UAS integration, FAA faces the challenge of ensuring that all of the various efforts supporting these integration issues within its own agency, as well as across federal agencies and other entities, align and converge in a timely fashion.
We have begun additional work on UAS that will look specifically at collaboration between the federal agencies responsible for UAS integration into the NAS and at research and development priorities to support UAS integration. In closing, FAA has made some progress in implementing various parts of the 2012 Act, and is seeking to address some of the key challenges it faces. Going forward, we will continue to monitor FAA’s progress, highlight the key challenges that remain, and identify the steps FAA and industry can take to find a way forward on the issues covered in this statement as well as other issues facing the industry. For example, as previously mentioned, we have work under way to examine organizational and leadership issues with NextGen, and to examine, in greater detail, FAA’s certification processes and progress made with respect to UAS. In addition, for this Committee we will be examining issues related to funding airport development, including passenger facility charges, airport improvement program grants, and the potential for greater private sector investment through public-private partnerships. Chairman LoBiondo, Ranking Member Larsen, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. NextGen Air Transportation System: FAA Has Made Some Progress in Midterm Implementation, but Ongoing Challenges Limit Expected Benefits. GAO-13-264. Washington, D.C.: April 8, 2013. Next Generation Air Transportation System: FAA Faces Implementation Challenges. GAO-12-1011T. Washington, D.C.: September 12, 2012. Air Traffic Control Modernization: Management Challenges Associated with Program Costs and Schedules Could Hinder NextGen Implementation. GAO-12-223. Washington, D.C.: February 16, 2012. Next Generation Air Transportation System: FAA Has Made Some Progress in Implementation, but Delays Threaten to Impact Costs and Benefits.
GAO-12-141T. Washington, D.C.: October 5, 2011. Integration of Current Implementation Efforts with Long-term Planning for the Next Generation Air Transportation System. GAO-11-132R. Washington, D.C.: November 22, 2010. NextGen Air Transportation System: FAA’s Metrics Can Be Used to Report on Status of Individual Programs, but Not of Overall NextGen Implementation or Outcomes. GAO-10-629. Washington, D.C.: July 27, 2010. Aviation Safety: Status of Recommendations to Improve FAA’s Certification and Approval Processes. GAO-14-142T. Washington, D.C.: October 30, 2013. Aviation Safety: FAA Efforts Have Improved Safety, but Challenges Remain in Key Areas. GAO-13-442T. Washington, D.C.: April 16, 2013. Aviation Safety: Additional FAA Efforts Could Enhance Safety Risk Management. GAO-12-898. Washington, D.C.: September 12, 2012. Aviation Safety: Certification and Approval Processes Are Generally Viewed as Working Well, but Better Evaluative Information Needed to Improve Efficiency. GAO-11-14. Washington, D.C.: October 7, 2010. Unmanned Aircraft Systems: Continued Coordination, Operational Data, and Performance Standards Needed to Guide Research and Development. GAO-13-346T. Washington, D.C.: February 15, 2013. Unmanned Aircraft Systems: Measuring Progress and Addressing Potential Privacy Concerns Would Facilitate Integration into the National Airspace System. GAO-12-981. Washington, D.C.: September 14, 2012. A “mishandled baggage” report is a report filed with a carrier by or on behalf of a passenger who claims loss, delay, damage, or pilferage of baggage. A mishandled-baggage report may represent one or more mishandled bags. Data limitations, however, impeded further analysis of baggage-handling processes.
We described DOT’s options for and the impact of implementing minimum compensation standards for delayed baggage, which included (1) keeping current regulations, which, among other things, require compensation for reasonable expenses that result because of delay in the delivery of baggage; (2) reimbursing passengers for the checked baggage fee if the bag is delayed; and (3) implementing compensation standards based on the length of delay. Air Traffic Collegiate Training Initiative (CTI). We found that the cost-effectiveness of the CTI schools depends on a number of cost elements that are currently unknown, including the upfront cost of developing new curriculums and how FAA implements training through the CTI schools, among other factors. In addition, we were not able to determine the potential effect of the alternative air traffic controller training approach through CTI schools on controller trainees; the concept would need further development before comparisons can be made about performance outcomes for such trainees under the current approach through the FAA Academy and the alternative approach through the CTI schools. FAA facility condition. While FAA has mechanisms to identify and mitigate safety deficiencies at FAA facilities and has taken actions to strengthen its capital planning process to help ensure its facilities are in good condition, our analysis of FAA’s statistical model for estimating the condition of uninspected terminal facilities found the model to be imprecise; it uses one variable—age of the facility—to estimate the facility’s condition. Furthermore, inaccuracies in FAA’s real estate management database undermine its usefulness as a tool to manage the agency’s real estate portfolio.
We recommended that FAA take action to improve the precision of the methods it uses to estimate the conditions of uninspected terminal facilities and implement a plan to improve its database for tracking its inventory of real property assets, consistent with sound data-collection practices. National Mediation Board. We found that the National Mediation Board, which facilitates labor relations in two key transportation sectors—railroads and airlines—through mediation and arbitration of labor disputes and oversight of union elections, has adapted to challenges presented by large union elections resulting from airline mergers and has implemented improvements such as online voting. However, the board lacks some controls in key management areas, such as a formal mechanism for tracking resolution of findings and recommendations, which could put its resources and its success at risk. We made a number of recommendations to improve the board’s planning and make the most effective use of its limited resources and also noted that Congress should consider authorizing an appropriate federal agency’s Office of Inspector General to provide additional oversight. Airport-intercity passenger rail connectivity. Most major U.S. airports have some degree of physical proximity to intercity passenger rail stations; however, air-rail connectivity remains limited due to a variety of factors. We found that connectivity between these two modes may provide a range of mobility, economic, and environmental benefits, and while strategies exist to improve connectivity, the costs and trade-offs of enhancing connectivity could be substantial. GAO, Intermodal Transportation: A Variety of Factors Influence Airport-Intercity Passenger Rail Connectivity, GAO-13-691 (Washington, D.C.: August 2, 2013).
Appendix III: Status of Requirements for UAS Integration under the 2012 Act as of January 2014

05/14/2012: Enter into agreements with appropriate government agencies to simplify the process for issuing certificates of waiver or authorization (COAs) or waivers for public UAS. Status: In process. A memorandum of agreement (MOA) with the Department of Defense (DOD) was signed in September 2013; an MOA with the Department of Justice (DOJ) was signed in March 2013; an MOA with the National Aeronautics and Space Administration (NASA) is in final coordination; and MOAs with the Department of the Interior (DOI) and the National Oceanic and Atmospheric Administration (NOAA) are still in draft.

08/12/2012: Establish a program to integrate UAS into the national airspace at six test ranges. This program is to terminate 5 years after the date of enactment.

08/12/2012: Develop an Arctic UAS operation plan and initiate a process to work with relevant federal agencies and national and international communities to designate permanent areas in the Arctic where small unmanned aircraft may operate 24 hours per day for research and commercial purposes.

08/12/2012: Determine whether certain UAS can fly safely in the national airspace before the completion of the Act’s requirements for a comprehensive plan and rulemaking to safely accelerate the integration of civil UAS into the national airspace, or the Act’s requirement for issuance of guidance regarding the operation of public UAS, including operating a UAS with a COA or waiver.
11/10/2012: Expedite the issuance of a COA for public safety entities.

11/10/2012: Develop a comprehensive plan to safely accelerate integration of civil UAS into the national airspace.

11/10/2012: Issue guidance regarding operation of public UAS to expedite the COA process; provide a collaborative process with public agencies to allow an incremental expansion of access into the national airspace as technology matures and the necessary safety analysis and data become available, and until standards are completed and technology issues are resolved; facilitate the capability of public entities to develop and use test ranges; and provide guidance on public entities’ responsibility for operation.

02/12/2013: Make operational at least one project at a test range.

02/14/2013: Approve and make publicly available a 5-year roadmap for the introduction of civil UAS into the national airspace, to be updated annually.

02/14/2013: Submit to Congress a copy of the comprehensive plan.

08/14/2014: Publish in the Federal Register the final rule on small UAS.

08/14/2014: Publish in the Federal Register a notice of proposed rulemaking to implement recommendations of the comprehensive plan.

08/14/2014: Publish in the Federal Register an update to the Administration’s policy statement on UAS in Docket No. FAA-2006-25714.

09/30/2015: Achieve safe integration of civil UAS into the national airspace. Status: None to date.

12/31/2015: Develop and implement operational and certification requirements for public UAS.

02/14/2017: Report to Congress on the test ranges.

For further information on this testimony, please contact Gerald L. Dillingham, Ph.D., at (202) 512-2834 or dillinghamg@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Individuals making key contributions to this testimony include Andrew Von Ah, Assistant Director; Mike Armes; Martha Chow; Geoff Hamilton; Dave Hooper; Daniel Hoy; Eric Hudson; Bert Japikse; Heather Krause; Sara Ann Moessbauer; Faye Morrison; Nalylee Padilla; Melissa Swearingen; and Jessica Wintfeld. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The U.S. air transportation system is the busiest and among the safest in the world. Even so, maintaining and improving the extraordinary level of connectivity and mobility the system affords, and the safety record achieved to date, requires continued attention and effort. In the 2012 Act, Congress directed FAA to take various actions to improve the safety and efficiency of the current NAS while transitioning to NextGen. In addition, given the potential and opportunities afforded by new UAS technologies, the 2012 Act included several provisions with respect to FAA safely integrating UAS into the NAS. Based on work GAO has conducted for this Committee since the passage of the 2012 Act, this testimony discusses FAA's challenges and progress in (1) implementing NextGen, (2) improving aviation safety, and (3) integrating UAS into the national airspace system. This statement draws from several GAO reports completed since the 2012 Act, as well as earlier reports on these topics. To update information in those reports, GAO interviewed officials from FAA and industry and reviewed agency documents.
The FAA Modernization and Reform Act of 2012 (the 2012 Act) contained several provisions related to implementing the Next Generation Air Transportation System (NextGen)—a complex, long-term initiative to incrementally modernize and transform the national airspace system (NAS). GAO's recent work on NextGen has highlighted three key implementation issues: Improving NextGen Leadership: Complex transformations, such as NextGen, require substantial leadership commitment over a sustained period, and leaders must both be empowered to make critical decisions and be held accountable for results. The 2012 Act created a Chief NextGen Officer position, which FAA filled in June 2013, and FAA has recently filled other key NextGen leadership positions. With these positions filled, FAA should be in a better position to resolve its NextGen leadership challenges. Demonstrating Near-Term Benefits: The 2012 Act included a number of provisions aimed at accelerating the creation of performance-based navigation (PBN) procedures, such as following precise routes that use the Global Positioning System, which can save airlines and other aircraft operators money through reduced fuel burn and flight time. FAA must continue to deliver PBN capabilities and begin to demonstrate a return on operators' investments. As of January 2014, FAA has implemented PBN procedures at two of the five airports selected for early deployment. Balancing the Needs of the Current Air Traffic Control System and NextGen: While the 2012 Act contained a number of provisions aimed at accelerating NextGen implementation, GAO found that FAA's budget planning does not fully account for the impact on the agency's operating costs of the NextGen systems that will be deployed in future years, along with the need for continued operation and maintenance of existing systems and facilities. Cost estimates for maintaining existing systems and facilities, coupled with implementing NextGen, exceed anticipated funding levels.
GAO recommended improvements to FAA's budget-planning and infrastructure-condition data, which FAA is working to implement. Safety in the aviation industry is achieved in part through adherence to various certification standards. The 2012 Act required FAA to work with industry to assess the certification process. GAO's work has found that while FAA has made progress in developing its plan to implement the resulting recommendations, FAA continues to lack performance measures to track its progress. For unmanned aircraft systems (UAS), FAA has implemented 7 of the 17 requirements established in the 2012 Act, representing progress since GAO's last update in January 2013. However, FAA continues to experience challenges implementing the provisions of the 2012 Act and integrating UAS into the NAS. For example, although FAA has had efforts under way since 2008 to support a rulemaking on small UAS, it is unlikely that FAA will meet the August 2014 final rule deadline required by the 2012 Act because it has not yet issued a Notice of Proposed Rulemaking. In addition, while FAA created the UAS Integration Office in 2013 to lead UAS integration, as of January 2014 the program lacks an operations budget.
Industrywide regulation of the U.S. airline industry began in 1938 in response to congressional concern over safety, airlines' financial health, and perceived inequities between airlines and other regulated forms of transportation. The Civil Aeronautics Act of 1938 (P.L. 75-706) applied to interstate operations of U.S. airlines and gave the Civil Aeronautics Authority, redesignated as the Civil Aeronautics Board (CAB) in 1940, authority to regulate which airlines operated on each route and what fares they could charge. Airlines could not add or abandon routes or change fares without CAB approval. CAB also limited the number of airlines in the industry. In 1938, the interstate U.S. airline industry consisted of 16 "trunk" airlines, but this number contracted to 10 by 1974, despite 79 applications from new airlines to initiate service. Competition on a route was limited to one airline unless the CAB determined that demand was sufficient to support an additional airline. Airfares were based on a complex cost-based formula used by the CAB, though the exact formulas and process varied over the life of the CAB. Generally, though, airlines during this time had little incentive to reduce costs, since each was assured a fixed rate of return. As a result, what competition existed among airlines was largely based on the quality of service. Airlines operated largely a point-to-point system, more similar to railroads than to the airline networks that we know today. For example, as shown in figure 1, the route maps of Eastern Airlines (1948) and Western Airlines (1962) show a system vastly different from today's hub-and-spoke networks. Airlines have traditionally relied on union labor, and labor relations have been covered by the Railway Labor Act since 1936. The union bargaining structure that developed within the airline industry has been highly decentralized and separated by craft (e.g., pilots, mechanics, etc.).
Before deregulation, unions and airline management engaged in carrier-by-carrier bargaining whereby the last contract signed by one carrier generally served as the starting point for the next airline (known as "pattern bargaining"). During regulation, labor relations were generally good because CAB's fare-setting allowed airlines to pass increased labor costs on to passengers. Airlines' bargaining power was enhanced by the Mutual Aid Pact, a strike insurance plan created in 1958, through which a struck airline was compensated by nonstruck airlines based on the increases in traffic the latter received during a strike. The Mutual Aid Pact was eliminated with deregulation, thereby enhancing airline labor's power in contract negotiations. The Airline Deregulation Act phased out federal control over airline pricing and routes. Airline deregulation was premised on an expectation that an unregulated industry would attract entry and increase competition among airlines, thereby benefiting consumers with lower fares and improved service. The experience of unregulated (i.e., state-regulated) intrastate service in Texas and California provided support for this expectation. Moreover, prior to deregulation, industry analysts—on the basis of conventional economic reasoning—expected that opportunities for increased competition would increase the number of airlines operating in many markets, thereby lowering fares and expanding service. The Airline Deregulation Act established specific goals of encouraging competition by attracting new entrant airlines and allowing existing airlines to expand. According to the act, competition was expected to lower fares and expand service, the chief aims of deregulation. At the same time, Congress recognized that deregulation could lead to economic dislocations for some communities and workers as service patterns adjusted and airlines entered and exited markets and the industry overall. As a result, the Essential Air Service (EAS) program and the Employee Protection Program (EPP) were established.
The EAS program was put into place to guarantee that small communities served by commercial airlines before deregulation would maintain a minimal level of scheduled air service. DOT currently subsidizes commuter airlines to serve approximately 150 rural communities across the country that otherwise would not receive any scheduled air service. According to DOT, EAS subsidizes 39 communities in Alaska and 115 more in the rest of the United States. The EAS budget ranged from about $100 million early in the program down to about $25 million, before rising in recent years to $100 million. In fiscal year 2006, EAS was funded at $109 million. EPP was created, first, to compensate airline workers who lost their jobs or received lower pay as a result of bankruptcies or major contractions whose major cause was airline deregulation and, second, to grant such workers first-hire rights. However, the Department of Labor delayed the establishment of regulations to administer these rights, Congress did not appropriate funds to compensate displaced employees, and airlines fought the requirements in court. On August 7, 1998, the statute authorizing the EPP was repealed. No compensation was ever provided to displaced employees, and the first-hire right was never enforced. While the setting of airline entry and rates was deregulated, the federal government is still involved in many facets of the airline industry, including many aspects that affect the economics of the industry. For example, the federal government still influences financing and investment decisions affecting the nation's aviation infrastructure, including airports and air navigation systems. In addition to the various taxes and user fees on commercial airline tickets, which averaged 15.5 percent of the base fare in 2002, the federal government also provides support from its general fund for FAA operations.
In 2007, the Airport and Airway Trust Fund, which finances the nation's aviation infrastructure, will be up for renewal. The federal government also provided commercial airlines with $7.4 billion in financial assistance and $1.6 billion in loan guarantees for six airlines as a result of the September 11, 2001, terrorist attacks. Finally, the Pension Benefit Guaranty Corporation (PBGC) has assumed almost $12 billion in net airline pension obligations since 1991. The airline industry has undergone significant change since the late 1970s. Air travel, and along with it industry revenues and expenses, has tripled since 1978. However, industry profits have become increasingly cyclical, with the most recent downturn leading to almost $28 billion in operating losses since 2001. Airline employee compensation grew following deregulation, even though many studies have found that employees earned a premium under regulation. Nevertheless, employee compensation as a share of total expenses has declined, especially in recent years. During regulation, airlines operated almost as regulated monopolies, encountering little competition and facing little pressure to restrain costs because fares were based on the airlines' costs plus a fixed rate of return. Following deregulation, legacy airlines were able to stave off new entrant competition through various operating barriers, such as FAA-imposed take-off and landing times at congested airports (slot controls), perimeter rules at Washington Reagan National Airport, and airlines' exclusive-use control of gate leases; and through business practices, such as frequent flyer programs and ticket distribution systems. The market downturn that began in 2000 exposed legacy airlines' precarious financial condition, allowing low-cost airlines the opportunity to compete more aggressively. Owing to financial instability since deregulation, airlines operating in bankruptcy have become more common, but we found that bankruptcy protection has not adversely affected nonbankrupt airlines.
More troubling has been the use of bankruptcy to terminate defined-benefit pension plans, costing the PBGC and airline employees billions of dollars. Only two airlines still offer defined benefit pension plans. The U.S. airline industry has expanded threefold since deregulation. Figure 2 shows that the consumption of airline travel as measured by revenue passenger miles (RPM) grew from 188 billion RPMs in 1978 to 584 billion RPMs in 2005, while airline capacity grew at a similar pace—from 306 billion available seat miles (ASM) in 1978 to 758 billion ASMs in 2005. Over the same period, revenue passenger enplanements increased from 254 million in 1978 to 670 million in 2005. Owing to the growth of air travel, U.S. airlines’ revenues grew almost fourfold in real terms (see fig. 3). However, expenses also grew at a similar pace, sometimes outpacing industry revenues. While profits were relatively stable under regulation, earnings have been increasingly cyclical since deregulation. One explanation for this cyclicality is that, with revenues closely tied to the business cycle, high fixed costs for aircraft, and a rigid and costly labor structure, outside shocks—such as the September 11, 2001, attacks or high fuel prices—make it difficult for the industry to adjust its capacity. The industry has incurred operating losses of nearly $28 billion since 2001, most of this by legacy airlines. These airlines have compensated by taking on additional debt, using all (or nearly all) of their assets as collateral and limiting future access to capital. There have been significant changes to airline employee compensation, employment, and productivity since deregulation. Prior to deregulation, labor was highly unionized and wage demands were typically met. Regulation allowed for increases in labor costs to be passed on to consumers through the regulated fare system. 
Several studies have estimated that airline wages were greater under regulation than they would have been in a competitive deregulated market. Even so, industry growth, barriers to entry, and union bargaining strength allowed labor to protect its compensation following deregulation. Since 1978, airline industry salaries and total compensation have experienced real increases, though with some decline since 2002 (see fig. 4). Inflation-adjusted benefits per employee grew on average from $14,703 in 1979 to $24,852 in 2004, a real increase of almost 70 percent. Meanwhile, inflation-adjusted salaries per employee grew from $52,295 in 1979 to $54,848 in 2004 on average, a real increase of less than 5 percent. Despite this increase in compensation costs, employee compensation as a share of total operating costs has declined since deregulation, especially since 2002 (see fig. 5). This decline in compensation costs as a share of total operating expense is attributable to falling employment levels, to large increases in capacity, and to increases in other costs (especially for fuel). Employment began to decline with the industry downturn that began in 2000. As a result, measures of overall industry efficiency (as illustrated by available seat miles per employee in fig. 6) increased significantly. This is attributable to efficiency gains by legacy airlines during and under the threat of bankruptcy, and to more efficient low-cost carriers providing more capacity than previously. Following deregulation, legacy airlines were considerably larger and better financed than the host of small new airlines that entered the marketplace. Most of the new entrant airlines during the 1980s and 1990s failed. Large legacy airlines were generally able to retain market share despite new entrant airlines because of operating barriers—such as slot controls—and business practices—such as frequent flyer programs—that gave them competitive advantages.
Larger and better-capitalized legacy airlines seeking to increase market share acquired weaker airlines—for example, American Airlines' acquisition of Reno Air. Legacy airlines built up their hub-and-spoke networks, which allowed them to build their traffic flows and fend off potential competitors. We and others reported on the higher fares experienced by passengers who had to use these "fortress hubs." Legacy airlines also developed regional, national, and international code-sharing arrangements to extend their networks and compete for domestic and international passenger traffic. During the 1990s, we repeatedly reported on these and other barriers to entry that limited competition in the U.S. airline industry. Since the industry downturn that began in 2000, there has been a shift in the airline industry: a weakening of the financial condition of legacy airlines and an increasing market share for low-cost carriers. The consequences of an overburdened cost structure for legacy airlines became apparent after 2000 when demand fell, especially demand from premium-fare business travelers. Low-cost airlines, which generally did not have these cost structures, have been able to increase their market share, while legacy airlines have struggled to bring their costs down. As we reported in 2004, low-cost airlines increased their presence in the top 5,000 domestic city-pair markets by 44.5 percent, from 1,594 markets in 1998 to 2,304 markets in 2003. In 1998, low-cost airlines operated in 31.5 percent of markets served by legacy airlines, providing a low-cost airline alternative to 72.5 percent of passengers. By 2003, low-cost airlines competed directly with legacy airlines in 45.5 percent of markets served by legacy airlines, serving 84.6 percent of passengers in the top 5,000 markets.
While legacy airlines began to reduce their operating costs starting in 2001, they did so through capacity reductions and were not able to reduce their unit costs vis-à-vis low-cost airlines that were adding capacity. We warned that legacy airlines could not survive with continued losses. In 2005, two legacy airlines (Delta and Northwest) entered bankruptcy and are currently attempting to reorganize. In 2005, we examined the issue of airline bankruptcy and, in particular, how some airlines were using bankruptcy to terminate their defined benefit pension plans. We found that bankruptcy has been endemic to the airline industry since deregulation, with 162 bankruptcy filings since 1978, owing to the fundamental financial weaknesses of the airline industry. Despite the prevalence of bankruptcy, however, we found no evidence that bankruptcy harmed the airline industry by contributing to overcapacity or underpricing. Nevertheless, we expressed concern about the use of bankruptcy to terminate defined benefit pension plans because of the costs to the federal government as well as to employees and beneficiaries. US Airways and United, subjected to intense cost pressures from growing low-cost airlines like Southwest, entered bankruptcy and terminated their labor contracts and pension plans. The pension plan terminations cost PBGC nearly $10 billion, and plan participants lost more than $5 billion in promised benefits that are not covered by PBGC. If Delta and Northwest, which entered bankruptcy in 2005, similarly terminate their pension plans, the costs to PBGC and plan participants will be even greater. At present, only American Airlines and Continental have active defined benefit pension plans, while the remaining airline plans are either terminated or frozen. In total, active and frozen airline plans were underfunded by almost $15 billion at the end of 2005, according to Securities and Exchange Commission filings.
Airfares have fallen in real terms over time, with round-trip median fares almost 40 percent lower since 1980. However, fares in short-distance markets (less than 250 miles) and “thin” markets (the bottom 20 percent of passenger traffic) have not fallen as much as those for longer distances or in heavily traveled markets. Price dispersion—that is, the extent to which passengers in the same city-pair market pay different fares—has also declined since 2003, likely indicating consumers’ unwillingness to pay the very high fares airlines were able to charge in the late 1990s. The extent to which these benefits are attributable to deregulation as opposed to other factors, such as advances in technology, is uncertain. Various studies have attributed significant consumer benefit to deregulation, but estimating this benefit depends on several major assumptions and is not free of controversy. The decline in fares coincided with a growth in passenger traffic and increased competition over the period. While large communities and markets have experienced large gains in the number of passengers and service, as well as increased competition, small communities and markets have experienced much smaller gains. On average, however, the number of competitors in city-pair markets grew from 2.2 in 1980 to 3.5 in 2005. Our analysis of DOT’s ticketing data from 1980 to 2005 shows substantial decreases in median fares since 1980, with an overall decrease of nearly 40 percent for median round-trip fares since that time. In addition, our analysis shows a convergence of fares across trip distances, although substantial differences in fares by trip length and by market size remain. In recent years, passengers flying long distances or in medium to large markets have paid much lower fares as compared with 1980 fares, while those flying in smaller markets or over shorter distances today have seen a smaller reduction in fares as compared with 1980 fares. 
Finally, the difference between the fares paid by customers flying within the same routes began to decline in 2003, after increasing in the years following deregulation. Overall, median round-trip fares have declined 38 percent since 1980, falling from $414 to $256. The largest decreases occurred in the late 1980s, but the overall trends have continued down in subsequent years. Figure 7 provides information about median round-trip fares. Median fares have converged when compared by the distance traveled since deregulation. In 1980, median fares ranged from $680 for trips longer than 1,500 miles to $230 for trips of 250 miles or less, reflecting the pricing structure in place under regulation, which linked fares to costs while subsidizing shorter routes. Since that time, however, fares have converged toward the low end of this range, with the longest trips now averaging just $326, a drop of 52 percent. Median fares for the shortest trips, in contrast, have not fallen as much. For trips of 250 miles or less, median fares have fallen 13 percent to $201. Figure 8 provides information about median fares by distance categories. The size of the market has also affected how fares have changed since deregulation. The smallest markets continue to have the highest average fares, and have seen the smallest reduction in these fares (see fig. 9). In 1980, passengers flying in the smallest markets paid $412 on average for their tickets, while those flying in the largest markets paid $329. By 2005, average fares in the smallest markets had fallen 16 percent to $348, while passengers in the other markets we analyzed saw their fares fall 26 percent or more on average. Examples of city pairs in the smallest-market category in both 1980 and 2005 include the Atlanta, Georgia–Joplin, Missouri route; and the Great Falls, Montana–Sacramento, California route.
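The fare changes reported in this section are simple percentage declines from the 1980 medians. A minimal sketch checking the figures cited in the text (the function name is illustrative, not from the report; fare values are taken from the text):

```python
def pct_decline(fare_1980, fare_2005):
    """Percentage decline from the 1980 median fare to the 2005 median fare,
    rounded to the nearest whole percent."""
    return round(100 * (fare_1980 - fare_2005) / fare_1980)

# Median round-trip fares cited in the text (inflation-adjusted dollars)
print(pct_decline(414, 256))  # overall median round-trip fare -> 38
print(pct_decline(680, 326))  # trips longer than 1,500 miles  -> 52
print(pct_decline(230, 201))  # trips of 250 miles or less     -> 13
```

Each result matches the percentage stated in the text, confirming the internal consistency of the reported medians.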
In contrast, the Boston, Massachusetts–New York, New York route and the Chicago, Illinois–Los Angeles, California route were in the largest-market category in both 1980 and 2005. While median fares trended down steadily after deregulation, the differences in the prices paid by individual customers in the same city-pair market grew, most notably in the 1990s with the increased use of yield-management systems by airlines. The dispersion of fares began to decline in 2003, however, when changes in the overall economy and a decline in the willingness of some passengers—notably business passengers—to pay higher fares for premium service likely combined with the increased use of the Internet for ticket purchases to reverse some of the prior increases in ticketing variation. Since then, the variability of fares has decreased, meaning that fares for most tickets sold are now generally more similar to average fares. Figure 10 illustrates the coefficient of variation, or dispersion, of round-trip yields. Many studies have estimated that consumers have benefited from deregulation. Assessments of these benefits, however, vary substantially, as have the methodologies used. One approach is to calculate the difference between actual fares and a benchmark proxy measure of what fares might have been had the industry remained regulated. Any differences are then attributed to the effects of deregulation. Some studies using this approach have used the Standard Industry Fare Level (SIFL) to approximate the regulated fare and concluded that consumers as a whole have benefited from lower fares resulting from deregulation. For example, in 2005 Rose and Borenstein compared postderegulation fares to the SIFL and estimated that 2004 fares were about 30 percent lower than what the comparative regulated fares would have been, resulting in a $5 billion savings to passengers that year.
Likewise, Winston and Morrison used the same proxy in 1995 and estimated that real fares declined about 33 percent from 1976 to 1993. After adjusting the SIFL data to account for presumed productivity gains and increased load factors, they estimated that, on average, deregulation led to fares 22 percent lower than they would have been in a regulated environment, resulting in an annual savings of about $12.4 billion in 1993 dollars over the same period. While pointing to declines in overall fares, these studies also indicated that benefits have been unevenly distributed by market size and route length. In fact, those traveling on heavily traveled routes are likely to be paying less than they would have paid under a regulated system, and those flying on shorter-distance routes are likely to be paying more. Some experts have questioned the extent to which deregulation can be credited for decreases in airfares since 1978, and draw attention to the difficulty of measuring its impact. First, a former CAB and DOT official, who participated in CAB route awards and fare determinations and later calculated the SIFL for DOT, points out that the fare ceilings used by CAB under regulation—calculated under the Domestic Passenger Fare Investigation (DPFI)—were more complicated than their proxies. Rose and Borenstein also acknowledged that using the SIFL as a proxy for the CAB-regulated fare may be increasingly implausible, given that it is unlikely that the same cost assumptions would have been used for the 27 years following deregulation. As a result, using the SIFL to approximate airline fares under regulation may overestimate the savings resulting from deregulation. For example, while the DPFI fare calculations took several factors into account, including depreciation and capacity, the SIFL calculations primarily consider airline costs.
The former DOT official further noted that the DPFI calculations allowed for discounted fares if load factors were increased to offset the fare reduction, something not reflected by the SIFL fare. Second, some experts have pointed out that fares were already declining before deregulation, thus making it difficult to attribute changes in the industry to deregulation rather than to improvements in productivity and other factors. In fact, real average fares paid per mile (yields) since 1962 do show a steady decline, reflecting both CAB fare-setting flexibility and cost savings following the introduction of jet service in the early 1960s, but without a sharp break in 1978 following the deregulation of the industry (see fig. 11). As predicted, airline city-pair markets have become more competitive since deregulation. As shown in figure 12, the average number of effective competitors (any airline that carries at least 5 percent of the traffic in that market) in any city pair increased from 2.2 in 1980 to 3.5 in 2005. By 2005, 76 percent of the city-pair markets we analyzed had three or more carriers, compared with 34 percent of all city-pair markets in 1980 (see fig. 13). By contrast, the percentage of city-pair markets with only one carrier decreased from 20 percent in 1980 to 5 percent in 2005. As these two figures show, most of the increase in competition occurred during the 1980s, just after deregulation. Longer-distance markets are more competitive than shorter-distance markets, some of which have lost competitors since 1980. While city pairs with a distance of over 1,500 miles have seen an increase in the average number of carriers from 2.3 in 1980 to 4.2 in 2005, markets shorter than 250 miles have seen a decrease from 1.6 in 1980 to 1.4 in 2005 (see fig. 14). This difference exists in large part because longer-distance markets have more viable options for connecting over more hubs.
For example, a passenger on a long-haul flight from Harrisburg, Pennsylvania, to Seattle, Washington (a distance of over 2,000 miles), would have options of connecting through six different hubs, including Cincinnati, Chicago, and Detroit. By comparison, a passenger from Harrisburg to Rochester, New York (a distance of just over 200 miles), has three viable connecting options. Passenger traffic, already concentrated in relatively few city-pair markets in 1980, has become more concentrated. In 1980, 80 percent of passenger traffic occurred in the largest 14.1 percent of all city-pair markets, but by 2005, that same percentage of traffic occurred in the largest 10.7 percent of all city-pair markets (see fig. 15). While large markets have seen substantial gains in traffic, smaller markets have not, and in many cases have actually seen declines in traffic since deregulation. For example, while the number of passengers flying between Washington, D.C., and Los Angeles grew 327 percent between 1980 and 2005 in our sample, the number traveling between Boston and Cedar Rapids, Iowa, decreased 49 percent. The number of city-pair markets has increased modestly since 1980. Largely owing to an overall growth in traffic, the number of city pairs with at least 13 passengers in the sample per quarter (which equates to about 130 actual passengers per quarter) increased by over 3,800 city-pair markets between 1980 and 2005, from about 8,500 to over 12,300 (see fig. 16). However, few cities have gained air service since deregulation because the airport system was already largely developed at the time of deregulation, so the number of cities that could be connected would not be expected to have changed much since then. Instead, many city-pair markets that could be connected did not have enough actual passengers reflected in the sample data to be counted.
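The effective-competitor measure described earlier counts any airline carrying at least 5 percent of a city-pair market's traffic. A minimal sketch of that count (the market data below are hypothetical, for illustration only):

```python
def effective_competitors(passengers_by_airline, threshold=0.05):
    """Count airlines carrying at least `threshold` share of a city-pair
    market's traffic (5 percent by default, per the measure in the text)."""
    total = sum(passengers_by_airline.values())
    return sum(1 for p in passengers_by_airline.values() if p / total >= threshold)

# Hypothetical market: three airlines clear the 5 percent bar, one does not
market = {"Airline A": 500, "Airline B": 300, "Airline C": 160, "Airline D": 40}
print(effective_competitors(market))  # -> 3
```

Averaging this count over all sampled city pairs yields the 2.2 (1980) and 3.5 (2005) figures reported above.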
Smaller communities, in general, have not experienced the same increases in traffic and air service as larger cities since deregulation—particularly in recent years, when many small cities lost service or experienced a decline in the number of departures. For example, we reported in 2005 that while large-, medium-, and small-hub airports have seen traffic rebound since September 11, 2001, nonhub airports had 17 percent fewer flights in July 2005 than in July 2000. Additionally, we reported in 2002 that traffic at EAS communities decreased 20 percent from 1995 to 2002. However, lack of service for small communities is not solely a problem of the deregulated era. We reported in 1985 that in the 10 years leading up to deregulation, 137 small communities lost all commercial air service. The primary reason for diminished service to smaller communities is the lack of a population base to support that service. Local air traffic is directly related to both local population and employment. For small communities located close to larger cities, these demand reductions are exacerbated because local passengers drive to airports in larger cities to access better service and lower fares. We reported in 2002 that some EAS airports serve only about 10 percent of the intercity traffic to and from their city because many travelers instead drive to alternative airports or to their destination. Small communities have not benefited from the service of low-cost carriers; as we reported in 2005, only 5 of over 500 nonhub airports received low-cost carrier service. Lack of service from low-cost airlines can partially explain why small cities also face relatively higher fares than larger cities do. Similarly, longer-distance markets have seen greater gains in traffic than shorter-distance markets. Passengers on routes of 1,500 miles or more increased 312 percent between 1980 and 2005, while passengers on routes between 250 and 499 miles grew 68 percent in our sample.
For example, while traffic between Dallas-Fort Worth, Texas, and Hartford, Connecticut—a distance of 1,470 miles—grew 477 percent between 1980 and 2005, traffic between New York and Raleigh-Durham, North Carolina—a distance of 427 miles—fell 19 percent in our sample. Short-distance markets lost a large share of their passengers after September 11, 2001, in part because the increased time required for security measures makes driving a more viable alternative. The frequency of short-haul flights has also decreased. DOT found that the number of scheduled flights under 250 miles decreased 26 percent between July 2000 and July 2005, while the number of flights of over 1,000 miles increased by 15 percent during that time. Our analysis indicates that the average number of connections needed, at a minimum, to connect any two cities has increased since 1980. Figure 17 shows the percentage of all city-pair markets in our sample with at least 13 passengers per quarter (or 130 actual passengers) that can be connected nonstop, with one connection, or with two connections. Very few city-pair markets currently require two connections. The average number of connections needed to connect any two city-pair markets increased from 1.6 in 1980 to 1.7 in 2005, which is likely attributable to the development of hub-and-spoke networks to connect airline traffic. For some passengers this development has increased the number of connections needed. For example, in 1980, passengers traveling between Philadelphia, Pennsylvania, and Tulsa, Oklahoma, could fly nonstop, but by 2005 one connection was required. While there may have been declines in nonstop connectivity for many small city-pair markets, the overall ability of passengers to connect to wider markets through hubs has likely improved. 
The shift from shorter-range turboprop planes to longer-range regional jets has allowed cities that are too small to support mainline jet service, but too far from hubs for turboprop service, to be connected to hubs, increasing the number of one-connection city-pair opportunities. The largest markets are generally served by nonstop service. In 2005, 88 percent of passengers traveled in city-pair markets that included nonstop service and less than 1 percent of passengers traveled in city-pair markets that required two connections. However, because many passengers in directly connected markets may choose to fly with a connection (e.g., in exchange for a lower fare), the actual number of passengers flying without a connection is lower. For example, while passengers flying between Seattle and Tampa, Florida, could fly nonstop in 2005 (and were able to in 1980), they could also choose to connect through a number of hubs, including Chicago, Atlanta, and Denver, Colorado, for a number of reasons. Our data do not distinguish between passengers who flew with one or two connections out of necessity (e.g., because of no better option in their market) or voluntarily when a direct flight was available. Additionally, the development of hubs has helped bring about increases in flight frequencies, allowing some passengers taking connecting flights to benefit from better flight times and reduced connection times. As another means of measuring changes to connectivity over time, we calculated a flight distance ratio. This ratio, also known as “circuity,” measures the total miles flown on a trip (adding up the distance of all segments of a flight) divided by the distance between origin and destination. A nonstop flight would have a ratio of 1, and a ticket with at least one stop would have a higher ratio the farther out of the way the connections were between origin and destination. Figure 18 shows that, since 1980, the flight distance ratio has slowly risen. 
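As a rough illustration, the flight distance ratio can be computed as follows (a minimal Python sketch; the function name and sample mileages are illustrative, not drawn from the DOT data):

```python
def flight_distance_ratio(segment_miles, nonstop_miles):
    """Circuity: total miles actually flown (the sum of all segment
    distances) divided by the great-circle origin-destination distance.
    A nonstop flight yields exactly 1.0; the farther out of the way the
    connections are, the higher the ratio climbs above 1.0."""
    return sum(segment_miles) / nonstop_miles

# A hypothetical one-stop trip via a hub slightly off the direct path:
flight_distance_ratio([610, 540], 1100)  # greater than 1.0
```

A fleet-wide average of this ratio, weighted by passengers, would rise as more travelers route over hubs, which is the trend figure 18 depicts.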
Much of this increase is likely due to the increased use of connecting flights. By other measures of airline service not covered by DOT’s Origin and Destination Survey data—such as flight frequencies, flight delays, and amenities—the record has been mixed. For example, in 1999 we reported that medium and large communities had seen significant improvements in their number of departures, nonstop destinations served, and use of jet service since deregulation. However, by other measures, service has deteriorated, especially in recent years as traffic has rebounded. For example, DOT has reported that 77.4 percent of flights arrived on time in 2005, compared with 82.1 percent in 2002 and 79.4 percent in 1990. Additionally, DOT reported receiving almost 7,000 consumer complaints in 2005, an increase of over 50 percent from 2003. According to our analysis of the evidence, reregulation of airline entry and rates would not benefit consumers or the airline industry. Although some aspects of customer service might improve, reregulation would likely reverse many of the gains made by consumers, especially lower fares. While numerous industries have been deregulated over the last 30 years, very few have been reregulated. We found that the few instances in which an industry was reregulated stemmed from inadequate competition, such as occurred in the cable television industry after it was deregulated. Lack of competition has not been the case in the airline industry, where competition has been keen. Our analysis of fares and service since deregulation provides evidence that consumers have benefited over the intervening years. While it is impossible to calculate these gains precisely because no regulated system exists against which to compare deregulated fares, deregulation has corresponded with increased competition in the airline industry, which has likely contributed to lower fares and a larger airline market than might have prevailed without it. 
Reregulating the airline industry would have ramifications reaching far beyond the fare and service effects on airline passengers and communities. For example, the higher fares for airline travel that would likely result from reregulating the industry could shift some of the nation’s 670 million domestic airline passengers to other modes of transportation that are neither as safe nor efficient as air travel, and considerable infrastructure investment would be required to handle the increased demand. Restoring service to some small communities is an insufficient reason to reregulate airline entry and rates. We previously reported that small communities face a range of fundamental economic challenges in attracting and retaining commercial air service. Among these challenges is the lack of a population base or economic activity that could generate sufficient passenger demand to make service profitable to airlines. Smaller communities located near larger airports may also face reduced demand because they do not have low-cost airlines or frequent service. Despite these challenges, smaller city-pair markets have generally experienced lower fares since deregulation—just not to the degree that the largest city-pair markets have. The smallest city-pair markets in our analysis have also experienced a net gain in the number of connections and in overall traffic since deregulation. If Congress determines that these markets are underserved, it might more directly address service to small communities through targeted legislation—such as increasing subsidies for EAS—than through wholesale reregulation. Finally, reregulating the airline industry would not salvage airline pensions. Legacy airlines’ financial problems are the result of the same competitive forces that contributed to lower fares for consumers. The demise of airlines since deregulation has been endemic to the airline industry, as more efficient airlines have taken market share from less efficient airlines. 
As we found in our 2005 report on airline bankruptcies and pension problems, pension losses were attributable to market forces, poor airline management and union decisions, and inadequate pension funding rules—including insufficient funding requirements and the inadequate relationship between premiums paid by plan sponsors and PBGC’s exposure to financial risk. These factors also led to the termination of pensions in other industries with large legacy pension costs, such as steel. Increasing fares via government-imposed price floors similar to those that existed prior to 1978 would be an inefficient means of ensuring that airlines would generate sufficient revenues to adequately fund their pension plans, especially when most airlines no longer offer defined benefit plans. Congress is currently considering changes to defined benefit pension regulation, including specific provisions that would grant airlines additional time to fund frozen defined benefit plans and thereby avoid plan terminations. We have previously recommended that Congress consider broad pension reform that is comprehensive in scope and balanced in effect. We provided a draft of this report to DOT for its review and comment. DOT officials provided some clarifying and technical comments that we incorporated where appropriate. We provided copies of this report to the Secretary of Transportation and other interested parties and will make copies available to others upon request. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me on (202) 512-2834 or at heckerj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report can be found in appendix II. 
To assess the original intent of Congress in passing the Airline Deregulation Act, we reviewed the act, its accompanying legislative reports and floor debates, and various other documents and studies. We also reviewed related court cases, studies, and historical accounts of airline deregulation. To evaluate past changes in the airline industry, we reviewed Department of Transportation (DOT) studies and our own prior reports, analyzed financial and operational data, and interviewed industry experts. We analyzed airline financial and operational data from DOT’s Form 41 data set. We obtained these data from BACK Aviation Solutions, a private contractor that provides online access to U.S. airline financial, operational, and passenger data that are reported by airlines to DOT. To assess the reliability of these data, we reviewed the quality control procedures applied to the data by DOT and BACK Aviation Solutions and determined that the data were sufficiently reliable for our purposes. To analyze changes to airline fares and service since deregulation, we used data from DOT’s Origin and Destination Survey. Begun in 1979, the survey captures the full itinerary and fare paid, as reported by airlines to DOT, from every tenth ticket. The survey does not include data on flight frequency, aircraft type, flight amenities, or other data that could be used to measure airline service. In the fourth quarter of 1998, DOT changed the name of the database from DB1A to DB1B and began collecting an additional data field to distinguish the carrier that issued the ticket from the carrier that operated the flight (e.g., a flight operated by Air Wisconsin as a US Airways Express flight, connecting to a US Airways mainline flight—all issued by US Airways under the “US” code). 
To assess the reliability of these data, we reviewed the quality control procedures applied to the data by DOT and determined that the data were sufficiently reliable for our purposes. We analyzed these data for the period from 1980 through the second quarter of 2005. We did not include 1979 data in our analysis because DOT staff reported that these data were not reliable, since many airlines had difficulties reporting data in the first full year of deregulation. We limited our analysis to data reported for the second quarter of every calendar year in order to avoid data reflecting increased summer travel or reduced winter travel. Furthermore, we limited our analysis to passenger itineraries wholly within the continental 48 states, thereby excluding international itineraries and any travel to airports in Alaska, Hawaii, and U.S. dependencies. We excluded international fares and foreign carriers because international markets were not deregulated when domestic markets were. We excluded flights to or from Alaska, Hawaii, and U.S. territories because of the long distances involved. In general, we limited our analysis to a subset of round trips and certain one-way trips between city pairs. We defined markets by city pairs rather than airport pairs. For cities served by multiple airports (e.g., the Dallas area includes both Dallas-Fort Worth International Airport and Dallas Love Field), we recoded all airports in the city to the one with the most enplanements. Thus, we identified round trips as those for which the final city on the ticket was the same as the originating city (even if the passenger record indicated, for example, that the trip originated at Dallas-Fort Worth and returned to Dallas Love Field). One-way trips were those in which no two cities in the ticket matched one another. We included only round trips involving two, four, or six flight segments (coupons). 
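The city recoding, round-trip identification, and coupon-count filter described above can be sketched roughly as follows (a minimal illustration; the function names and the airport-to-city mapping are our assumptions, not DOT's actual coding):

```python
# Hypothetical airport-to-city mapping: DAL and DFW both collapse to
# the Dallas market; unlisted airports fall back to their own code.
CITY_OF = {"DFW": "Dallas", "DAL": "Dallas", "BOS": "Boston", "PHX": "Phoenix"}

def is_round_trip(airports):
    """A ticket (an ordered list of airport codes) is a round trip when
    its final city matches its originating city after recoding."""
    cities = [CITY_OF.get(code, code) for code in airports]
    return cities[-1] == cities[0]

def keep_ticket(airports):
    """Retain only round trips with 2, 4, or 6 coupons, i.e., nonstop,
    one-stop, or two-stop itineraries in each direction."""
    n_coupons = len(airports) - 1  # each flown segment is one coupon
    return is_round_trip(airports) and n_coupons in (2, 4, 6)
```

Under this scheme a ticket originating at Dallas-Fort Worth and returning to Dallas Love Field still counts as a round trip, as the text describes.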
These represent round-trip itineraries with no stops, one stop, or two stops in each direction. In counting the number of coupons used in each direction of a flight (i.e., outbound or return), we relied on the “trip break” codes that DOT assigns. These codes indicate the point in a passenger’s itinerary at which the passenger begins the return trip. Because the data originally reported by the airlines do not unambiguously identify the point on a round trip at which the passenger begins the journey home, DOT applies an arithmetic algorithm that identifies the point in the itinerary farthest from the point of origination. However, because DOT’s trip break codes may incorrectly identify the destination airport, we eliminated any tickets that had an unequal number of coupons before and after the DOT-assigned trip break. Thus, we eliminated all 3- and 5-coupon round-trip tickets (e.g., one in which a passenger flies nonstop from Boston to Phoenix, Arizona, then to Denver, and back to Boston). On the other hand, we included records for round trips that had equal numbers of outbound and return coupons but connected over different airports on the outbound and return segments (for example, New York to Los Angeles connecting in Chicago westbound and in Dallas-Fort Worth eastbound). We analyzed changes in fares and yields in inflation-adjusted 2005 dollars, using the chain-weighted price index for gross domestic product. To compute the yield for every ticket, we divided the inflation-adjusted fare paid by the total distance between origin and destination for a one-way ticket or by double that distance for a round-trip ticket. We excluded tickets with unusually high fares (i.e., those with yields in excess of $3 per mile in 2005 dollars) because, according to industry researchers, these fares are likely to indicate data errors. 
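The yield computation and the $3-per-mile screen can be sketched as follows (a simplified illustration; the function and parameter names are ours, not the report's):

```python
def ticket_yield(fare_2005_dollars, od_miles, round_trip):
    """Inflation-adjusted fare per mile: the fare divided by the
    origin-destination distance for a one-way ticket, or by double
    that distance for a round-trip ticket."""
    distance = 2 * od_miles if round_trip else od_miles
    return fare_2005_dollars / distance

def keep_fare(fare_2005_dollars, od_miles, round_trip):
    """Drop likely data errors (yields above $3 per mile in 2005
    dollars); a $0 frequent-flyer ticket passes this screen."""
    return ticket_yield(fare_2005_dollars, od_miles, round_trip) <= 3.0
```

For example, a $500 round-trip fare over a 1,000-mile origin-destination distance yields $0.25 per mile and is retained.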
We retained tickets in the analysis when the fare paid was $0, indicating trips “purchased” with frequent flyer rewards. For our analyses of changes in fares and service, we divided city pairs into categories based on distance or passenger density. To determine the distance between city pairs, we calculated the distance between airports using the latitude and longitude of their locations. We then grouped all city pairs into 250-mile or 500-mile increments. We also determined the total number of sample passengers in each market. We then ranked, for each year, all markets by the number of passengers and grouped the markets into quintiles, in which each quintile had an approximately equal number of passengers. Because the number of passengers in each market changed from year to year, the specific markets in each quintile also changed from year to year. In our analysis of service measures, we counted only city-pair markets with at least 13 passengers per quarter in our sample, which equates to about 130 actual flying passengers. This threshold increased the probability that the changes in service we observed in our sample reflected actual flown routes and were not due to sampling or data error. We defined “service” in terms of connectivity and the number of competitors in a market. We measured connectivity in two ways: the minimum number of flight segments available to connect two cities and the extent to which passengers needed to connect over distant hubs to reach their destination. We identified the minimum number of connections needed to connect any two cities and also determined whether that number changed over time. Additionally, because some passengers will choose to connect between two cities rather than take nonstop flights (e.g., because fares may be cheaper or the schedules may be more convenient), we weighted the coupons by passenger traffic to establish how most passengers traveled in the city pair. 
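The airport-to-airport distance can be computed from latitude and longitude with the standard haversine (great-circle) formula; the report does not specify the exact formula it used, so this Python sketch is an assumption, as is the banding helper:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8  # mean Earth radius

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Haversine distance in statute miles between two points
    given in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def distance_band(miles, width=250):
    """Group a city pair into 250-mile (or 500-mile) increments,
    returning the lower bound of its band."""
    return int(miles // width) * width  # e.g., 427 miles -> the 250-499 band
```

A 427-mile market such as New York to Raleigh-Durham would fall in the 250-to-499-mile band under the 250-mile grouping.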
To determine whether passengers could fly more or less directly to their destinations, we calculated the distance between origin and destination along with the distance of every segment on the ticket. We then divided the sum of the segment distances by the distance between origin and destination (or twice that distance if the flight was a round trip) to estimate how far out of the way the travelers went. To analyze competition within markets and over time, for every city pair, we determined the market share for each reporting carrier, based on ticketed passengers, and counted only those carriers with at least 5 percent of the market as effective competitors. We excluded tickets with interlined flights in our analysis of city-pair competition. An “interlined flight” is one in which a passenger transfers from one carrier to another, unaffiliated carrier; that is, the passenger travels on at least two different reporting carriers. When analyzing city pairs for competition, we analyzed only those city pairs that, in any given quarter, had a minimum of 118 passengers in our sample (equaling a minimum of 1,180 actual passengers in the market). This passenger minimum was derived to provide us an acceptably low probability of misclassifying carriers as effective competitors, that is, as having at least a 5 percent market share. For various scenarios, with this market size threshold, the probability of correctly classifying carriers was at least 95 percent. We recognize that many other dimensions of service quality exist. In the past, we have reported changes in service quality in terms of available capacity out of particular cities, whether service was provided with jets or turboprop aircraft, and how many locations a passenger from a given city could reach on a nonstop basis. In addition, DOT collects other information on service quality, such as the percentage of on-time arrivals and departures and the number of consumer complaints about airlines. 
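The effective-competitor count for a city pair can be sketched as follows (an illustration only; representing the market as a list of per-ticket reporting carriers is our assumption):

```python
from collections import Counter

def effective_competitors(ticket_carriers, share_threshold=0.05):
    """Count the carriers holding at least a 5 percent share of a city
    pair's ticketed passengers. Interlined tickets (those touching more
    than one reporting carrier) are assumed excluded before this step."""
    counts = Counter(ticket_carriers)
    total = sum(counts.values())
    return sum(1 for n in counts.values() if n / total >= share_threshold)
```

For instance, in a market of 100 sampled passengers split 60/36/4 across three carriers, only the first two meet the 5 percent threshold and the market counts two effective competitors.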
Because of time constraints on this engagement, we were unable to incorporate more of these dimensions in our analysis. When DOT began requiring airlines to report the survey data, Southwest Airlines received a waiver that allowed it to report data differently because of its unique ticketing procedures, whereby it issued only one-way tickets. Under the waiver, Southwest reported its round trips to DOT as two separate trips, which were included in DOT’s DB1A or DB1B databases. Southwest maintained this waiver until the third quarter of 1998, when it was required to report ticket data more accurately, including both directions of a ticket. During the period covered by the waiver, the number of one-way tickets in the sample was artificially high. Recognizing that the data could be biased as a result, we reanalyzed our sample data without the Southwest data and found that the results were only marginally different. Median round-trip fares since 1999 have been between $17 and $25 lower with the inclusion of the Southwest Airlines fares than they would have been without them. Therefore, we did not exclude Southwest tickets after 1999 from our analysis of fares. To determine whether there is sufficient evidence to support reregulating the airline industry, we considered our findings under the prior questions and our earlier reports, especially those relating to pension regulation. We reviewed and updated the status of airline pension plans and assessed examples of deregulation and reregulation in other countries and in other industries. In addition, we reviewed our prior reports that have evaluated past problems in the airline industry, including small community service, barriers to entry, fare and service problems, and financial problems, including bankruptcy and pension issues. 
For this and the prior report questions, we reviewed our methods and results with DOT and academic experts from the Massachusetts Institute of Technology’s Global Airline Industry Program. We performed our work between September 2005 and May 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, Steven C. Martin, Assistant Director; Paul Aussendorf; Jay Cherlow; David Hooper; Colin Fallon; Mitch Karpman; Molly Laster; Sara Ann Moessbauer; and Mathew Rosenberg made key contributions to this report. | The Airline Deregulation Act of 1978 phased out the government's control over fares and service and allowed market forces to determine the price and level of domestic airline service in the United States. The intent was to increase competition and thereby lead to lower fares and improved service. In 2005, GAO reported on the tenuous finances of some airlines that have led to bankruptcy and pension terminations, in particular among those airlines that predated deregulation (referred to as legacy airlines). The House Report accompanying the 2006 Department of Transportation (DOT) Appropriation Act expressed concern about airline pension defaults and charged GAO with analyzing the impact of reregulating the airline industry on reducing potential pension defaults by airlines. GAO subsequently agreed to address the pension issue within a broad assessment of the airline industry since deregulation. Specifically, GAO is reporting on, among other things, (1) broad airline industry changes since deregulation, (2) fare and service changes since deregulation, and (3) whether there is evidence that reregulation of entry and fares would benefit consumers or the airline industry, or save airline pensions. DOT agreed with the conclusions in this report. GAO is making no recommendations in this report. The airline industry has undergone significant change since the late 1970s. 
Industry capacity and passenger traffic have tripled. At the same time, the industry's profitability has become more cyclical, and the financial health of large legacy airlines has become more precarious. Legacy airlines emerged from a regulated environment with relatively high structural costs, driven in part by labor costs, including defined benefit pension plan costs. Over the last few years, facing intense cost pressures from growing low-cost airlines like Southwest, both United and US Airways entered bankruptcy, voided labor contracts, and terminated their pension plans, costing the Pension Benefit Guaranty Corporation, the federal government insurer of defined benefit plans, $10 billion and costing beneficiaries more than $5 billion. In 2005, two other legacy airlines entered bankruptcy, leaving their pension plans in doubt. Only two airlines still have active defined benefit pension plans. Airfares have fallen in real terms over time, while service--as measured by industry connectivity and competitiveness--has improved slightly. Overall, the median fare has declined almost 40 percent since 1980 as measured in 2005 dollars. However, fares in shorter-distance and less-traveled markets have not fallen as much as fares in long-distance and heavily trafficked markets. Since 1980, markets have generally become more competitive, with the average number of competitors increasing from 2.2 per market in 1980 to 3.5 in 2005. The evidence suggests that reregulation of airline entry and fares would likely reverse many of the benefits that consumers have gained and would not save airline pensions. The change in fares and service since deregulation provides evidence that the vast majority of consumers have benefited, though not all to the same degree. Although a number of airlines have failed and some have terminated their pension plans, those changes resulted from the entry of more efficient competitors, poor business decisions, and inadequate pension funding rules. 
GAO has previously recommended that Congress consider broad pension reform. 
The Army Workload and Performance System (AWPS) is intended to resolve long-standing systemic problems in the Army’s civilian manpower requirements determination process. It is an information and reporting system that draws production and manpower data from other existing programs, including the Army’s Standard Depot System. Its main purpose is to provide decision support tools for linking workload demands to manpower requirements and the budget process. The system was initially installed at Corpus Christi Army Depot, Texas, in June 1996. Since then, it has been put into operation at the Army’s four other maintenance depots—Anniston, Letterkenny, Red River, and Tobyhanna. In 1999, the Assistant Secretary of the Army certified the system as fully operational for the maintenance mission at the five maintenance depots. The Army is moving forward with the installation of AWPS at all of its logistics and industrial activities. To date the system is being used as a decision-making tool in other functional areas, including ammunition logistics, base operations, materials usage, working capital fund budgets, and reporting of net operating results. The Secretary of the Army has directed that AWPS be used throughout the Army as the standard Armywide mechanism for determining manpower requirements for all of its logistics and industrial activities. The first AWPS master plan, submitted to Congress in April 1999, described the Army’s progress and future plans for developing and implementing the system. In our November 1999 report regarding that master plan, we pointed out that the information it contained was limited, and we recommended that the Army develop a more substantial master plan that incorporated all applications for which the system was to be implemented, along with their priorities, costs and benefits, and proposed schedules. We also recommended that the Army make improvements in the existing management and oversight structures. 
The House Report accompanying the National Defense Authorization Act for Fiscal Year 2001 required the Army to submit a revised master plan, incorporating our recommendations, by February 2001. Subsequently, section 346 of the National Defense Authorization Act for Fiscal Year 2002 required the Army to submit an annual progress report on its implementation of the revised master plan. Section 346 also required that these reports specifically address any changes made to the master plan since the previous report. In December 1999, the Army contracted with the Computer Sciences Corporation to create the Logistics Modernization Program, a new information system for managing the Army's supply, maintenance, and transportation functions. This system, initially called the Wholesale Logistics Modernization Program, will replace the existing Standard Depot System and many other source data systems, several of which provide data to AWPS. The Logistics Modernization Program is designed to improve readiness and logistics support to the war fighter by (1) reducing requisition response times, (2) improving the availability of supplies, (3) optimizing the use of inventory, and (4) responding more quickly to changing customer requirements. The milestones to the first deployment of the Logistics Modernization Program are shown in table 1. Once the Logistics Modernization Program becomes operational at the maintenance depots, the Army plans to shut down many of the old information systems that currently support AWPS, and the Logistics Modernization Program will become the primary source for the data that AWPS needs to function. As a result, section 346 of the National Defense Authorization Act for Fiscal Year 2002 encouraged the Army to set up a process that would permit or enhance data sharing between the two systems. 
To ensure that the Army's AWPS capabilities remained intact, section 346 also mandated that the Army retain AWPS as its standard servicewide manpower system, under the Secretary of the Army's supervision and management. This mandate was underscored in an August 9, 2001, letter from several congressional representatives to the Commander of the U.S. Army Materiel Command, which further requested that the Army refrain from incorporating AWPS into the Logistics Modernization Program. The Army's May 2002 report on its workload and performance system does not contain the information that Congress needs to assess the Army's progress in implementing the system. In response to the requirement for a progress report, as specified in section 346 of the Fiscal Year 2002 National Defense Authorization Act, the Army submitted an updated version of its May 2001 master plan. This updated version did not identify or explain the changes that the Army had made to the master plan since the May 2001 version. In addition, the Army's report did not contain certain cost, schedule, and performance information that would normally be expected. Moreover, the Army's report did not fully discuss the potential duplication and overlap in functions performed by the Logistics Modernization Program and the workload and performance system. Although required by section 346, the Army's 2002 report did not address the changes made to the milestones or tasks set out in the May 2001 AWPS master plan. Appendixes II and III provide tables showing the milestones and tasks identified in both the 2001 and 2002 reports. In comparing the two reports, we found that several milestones had been changed, but the 2002 report did not identify these changes, nor did it provide a detailed discussion of the reasons for the changes or their significance. 
For example, in its 2001 report the Army had scheduled Corpus Christi Army Depot as the first site to prototype the Net Operating Result capability, beginning in August 2001. We found, however, that in the 2002 report this task was set back by 1 year—to the fourth quarter of fiscal year 2002. The same task was also scheduled to be prototyped at one of the ammunition sites by March 2002, but this milestone was later delayed by about 1 year, until sometime between January and March 2003. In each case, the 2002 report did not provide an analysis or explanation for the scheduling change. We also found discrepancies between the two reports related to the phasing of certain tasks involved in implementing the new system. Some tasks that were assigned to a specific phase in the 2001 report were moved to a different phase in the 2002 report, and there was no discussion of why these changes were made or what their impact on the overall implementation schedule might be. For example, phase 1 of the 2001 report involved only the consolidation of ongoing implementation actions, whereas in the 2002 report phase 1 also included non-Army Materiel Command maintenance activities. The 2002 report, however, does not clearly address the status of tasks previously listed under phase 1. The Army's 2002 plan does not contain the cost, schedule, and performance data that might normally be expected. For example, according to the Department of Defense's (DOD) Regulation 5000.2-R, progress reports related to the acquisition of major new automated information systems should contain detailed information on such key parameters as cost, schedule, and performance. Army officials stated that the scope and cost of the AWPS system do not meet the minimum threshold for a major information system and, thus, that the regulation does not apply to it. 
While we agree that the AWPS system does not meet the threshold requirements of the regulation, we believe certain criteria in the regulation would provide Congress with the necessary information to properly evaluate the AWPS system and should therefore be addressed in the Army’s progress reports. Consequently, we have analyzed the AWPS report using criteria from the regulation. Additionally, the Clinger-Cohen Act of 1996 requires agencies to have investment management processes and information to help ensure that information technology projects are being implemented at an acceptable cost and within a reasonable and expected time frame. In effect, these requirements and guidance recognize that one cannot manage what one cannot measure. Finally, in our November 1999 report on the Army’s original master plan for AWPS, we identified several shortcomings, including the lack of detailed information on costs and expenditures, milestones, and performance. We recommended in that report that the Army develop a more substantive master plan that included priorities, costs and benefits, and schedules. In our analysis of the Army’s 2002 plan, we found that, while it addresses some of these elements, it does not provide the detailed or complete data that is needed to adequately assess the Army’s progress in implementing the workload and performance system. As table 2 shows, the 2002 plan contained information on a few parameters identified in DOD’s guidance, including direct costs; dates for certain events, such as reaching initial operating capabilities; and objectives for operational requirements. However, it did not include information on a large number of parameters, such as total procurement costs, critical schedule dates, and measures of performance. In addition, the 2002 report did not contain necessary cost, scheduling, and performance data for the individual tasks that the Army has assigned to each implementation phase. 
The implementation phases are: phase 1, implementation of the workload and performance system at non-Army Materiel Command maintenance depots; phase 2, expansion of the system into nonmaintenance missions (e.g., base operations, medical); and phase 3, development of decision-support tools for use at the major command and headquarters levels (e.g., working capital fund budget, links to the depot maintenance operational system, and cross-organizational activities). As table 3 illustrates, the Army’s report contained cost, scheduling, and performance information for only a small number of these tasks. Furthermore, we could identify specific costs for only one of the tasks and, in most cases, the milestones and performance measures were too broad and did not include interim measures and specific performance targets to measure progress. While the Army’s May 2002 report provided some estimated funding requirements for AWPS for fiscal years 2004 through 2006, it did not contain the detailed information that could be used to assess the costs of implementing the system thus far and the costs of expanding it into other functional areas in the future. According to the Army Materiel Command, the total estimated costs for the AWPS program were about $44.8 million for fiscal years 1996 through 2002, and the estimated program costs for fiscal year 2003 are about $8.9 million. The primary source for this funding has been the Army’s working capital fund. These figures and the funding sources, however, were not included in the Army’s report. In addition, the Army’s report did not identify the extent to which actual expenditures relate to the budgeted amounts. The report also did not provide any cost estimates for funding the Army’s plan to expand AWPS to other nonmaintenance activities, such as base operations support. According to Army officials, these expansion plans will require funding through the Army’s appropriated operations and maintenance accounts.
In its report, the Army estimated that it would need about $20.1 million over the next 3 fiscal years (2004 through 2006) to ensure that the remaining tasks are implemented. Table 4 shows the Army’s projected costs for fiscal years 2004 through 2006, which were included in its May 2002 report. According to the report, these future year costs are unfunded and the Army has not yet identified funding sources for them. Army officials stated that, other than the funding that has been provided through the working capital fund, the department has not adequately funded the AWPS expansion effort in recent years and that this lack of funding has hampered their ability to plan and implement further expansions. As table 4 indicates, the Army did not provide a detailed cost analysis regarding the historical and projected costs for AWPS, nor did it provide a complete summary of the estimated costs to complete the tasks listed for each phase. Specifically, the table includes cost estimates for additional system implementation (phase 1) and for the development of decision-support tools (phase 3), but it provides no specific estimates for expanding the system into other functional areas (phase 2). Additionally, the Army did not include the associated costs to support the development of all the specific tasks required to complete each phase. The Army’s May 2002 report contained only limited information on the milestones established to implement the new system and no data on whether earlier milestones had been reached, thereby making it difficult to assess the progress of the system’s development and implementation. Specifically, the report lacked schedules that include implementation and completion dates and interim milestones. For example, the Army is updating the Workload and Performance System applications from the original programming language to a more up-to-date programming language.
According to the Army, this upgrade has been installed at all five maintenance depots and will be installed at other installations between May 2002 and May 2003. However, specific dates for implementing or completing this upgrade were not included in the May 2002 report. In another example, the Army indicates that it intends to install AWPS at other nonmaintenance activities outside the Army Materiel Command, but it does not provide specific milestones for each location or the specific tasks associated with the development and installation process. As shown in appendix III, the Army has established expected completion dates for some of the AWPS applications, but the completion dates for other long-term applications have not yet been set. The Army’s May 2002 report also did not provide milestones for completing the interface between AWPS and the Logistics Modernization Program. Instead, it simply stated that between May 2002 and February 2003 the system has to accept, and operate with, data from the Logistics Modernization Program. The original date (July 2001) set to operationalize the interface at the first site, the Tobyhanna Army Depot, had slipped by about 18 months. In addition, the report noted that the Operations Support Command is scheduled to transition to the Logistics Modernization Program 1 year after the Communications and Electronics Command, or approximately January 2004. This date is about 2 years beyond the original date of October 2000. The Army’s May 2002 report does not address in detail the extent to which AWPS is providing the Army with the capability to match manpower requirements with workload, as initially intended.
While the report states that the implementation of AWPS in several mission areas within the Army Materiel Command has shown that the system can efficiently draw data from other existing systems and manipulate this information to link personnel needs with projected workloads, the Army has not demonstrated that AWPS has improved its ability to support its long-term forecasting of civilian personnel requirements based on projected workload. Because the Army did not provide supporting evidence for the statement in its May 2002 report that the system has led to increased operational efficiencies, the extent of the improvements is unclear. We did not independently review the effectiveness of the AWPS system at the depots we visited. The Army’s report also fails to discuss the potential overlap and duplication that exists between AWPS and the Logistics Modernization Program. Although these two systems were designed to serve different functions, Army and contractor officials point out that there is some potential overlap and redundancy in the systems’ capabilities. For example, the capability of the performance measurement and control module in the AWPS software also exists in the Logistics Modernization Program software configuration. This module allows the user to compare actual resource expenditures against production plans, scheduled workload, and related budgets for specific projects in order to determine the likelihood of completing a project within its estimated time frame and budget. In addition, because the Logistics Modernization Program is not complete, the Army cannot be certain what other capabilities may be duplicated. Army officials at the Tobyhanna Army Depot expressed concerns that the need to operate and maintain both systems could lead to higher costs and duplication of efforts. 
However, a second module, the strategic planning and forecasting module, is unique to AWPS and does not currently exist within the software configuration for the Logistics Modernization Program. This module provides the user with the capability to forecast manpower and capacity requirements based on future projected workload. More specifically, this module allows the Army to conduct “what if” analyses for manpower and capacity requirements based on future workload projections at each of its maintenance activities. Contractor officials stated that although this capability could be built into the Logistics Modernization Program, the module would have to be modified to be compatible with the current software configuration. By incorporating this capability into the Logistics Modernization Program, the Army could eliminate the need to operate and maintain two separate systems. Computer Sciences Corporation submitted a formal proposal to the Army in August 2001 to incorporate all of the capabilities of AWPS into the Logistics Modernization Program for an estimated contract price increase of about $2 million. Contractor officials told us in May 2002, however, that because of the amount of work they have dedicated to building the interface between the two systems, this cost estimate is no longer valid. Although the Army has begun developing an interface between AWPS and the Logistics Modernization Program, it has not sufficiently tested the interface to ensure that data can be shared between the two systems and that the AWPS capability will not be adversely affected. Once the Logistics Modernization Program is implemented, the Army plans to shut down several systems, including the Standard Depot System, that currently provide data for AWPS. However, the Army has not demonstrated that the Logistics Modernization Program databases will be able to supply AWPS with the data that it needs to continue to function.
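The core of the “what if” analysis described above, converting a projected workload into a manpower requirement, can be sketched in a few lines. The following is an illustration only; the function name, workload figures, and assumed productive hours per employee are hypothetical and are not drawn from AWPS:

```python
# Illustrative sketch of a workload-to-manpower "what if" analysis.
# All figures and names here are hypothetical, not actual AWPS data or logic.

def required_fte(projected_workload_hours: float,
                 productive_hours_per_fte: float) -> float:
    """Return the full-time equivalents needed to cover a projected workload."""
    return projected_workload_hours / productive_hours_per_fte

# Hypothetical scenario: compare staffing needs under two workload projections.
baseline_hours = 180_000   # projected direct labor hours for the year
surge_hours = 220_000      # "what if" projection with added workload
hours_per_fte = 1_600      # assumed productive hours per employee per year

print(required_fte(baseline_hours, hours_per_fte))  # 112.5 FTEs
print(required_fte(surge_hours, hours_per_fte))     # 137.5 FTEs
```

A production forecasting module would layer skill categories, capacity constraints, and scheduling onto this conversion, but this workload-to-staffing calculation is the foundation of the kind of “what if” comparison the module supports.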
Until the Army has placed the interface in operation at several sites, it will be too early to assess its effectiveness. The Army’s contract with the Computer Sciences Corporation to develop and field the Logistics Modernization Program required the contractor to create an interface between the two systems, and this work started in 1999. In February 2002, Army and contractor officials developed an interface control document that identified the data elements that AWPS would need from the Logistics Modernization Program databases to maintain its current capabilities. Since that time, contractor personnel have been working to locate the sources within the Logistics Modernization Program databases for each data element and to determine the most expedient way to move that data into AWPS. According to Army and contractor officials, about 90 percent of the data elements had been located by May 2002. While initial testing of the interface began in August 2002, it will be tested at only one of the five Army depots by February 2003, when the Logistics Modernization Program is scheduled to come on line. Specifically, the Army will be testing the interface at Tobyhanna Army Depot between August 2002 and February 2003, and it expects that the interface will be fully functional by the time the Logistics Modernization Program is deployed at the depot in February 2003. Subsequently, the Army plans to install the Logistics Modernization Program and the AWPS interface at the four remaining Army maintenance depots, along with the Army’s ammunition maintenance facilities. According to the May 2002 report, the Army expects to shut down the current information systems that support AWPS at the same time as it turns on the Logistics Modernization Program. As a result, there will be no transition period during which the current information systems and the Logistics Modernization Program are in operation at the same time.
The Army’s May 2002 report to Congress on the development and implementation of AWPS has a number of significant limitations. The report does not contain key information regarding the changes to the program since the submission of the May 2001 master plan, and it does not provide adequate information on the costs, schedule, and performance of the system. As a result, the report is of limited use to Congress in evaluating whether the AWPS project is still in line with its original cost, schedule, and performance objectives. The Army has not demonstrated to Congress how well the system has helped it thus far to determine future civilian personnel requirements based on projected workloads. Moreover, the report does not contain the information that Congress needs to determine how much funding will be required to complete the initial implementation of the system and expand it into other functional areas. AWPS provides the Army with a capability for strategic planning and forecasting at its maintenance facilities that currently does not exist within the Logistics Modernization Program. The interface that is being developed between the two systems is intended to allow the workload and performance system to maintain its current capabilities, including its strategic planning and forecasting module. Because each system offers the Army certain unique capabilities, there is a rationale for operating both systems at the same time. However, because the two systems may develop some overlapping and redundant capabilities in the future, there is some potential for increased costs or other inefficiencies.
In order to improve the quality of the Army’s annual progress reports to Congress on the implementation of AWPS and to enhance the efficiency and effectiveness of the system, we recommend that the Secretary of Defense direct the Secretary of the Army to:

- submit to Congress annual progress reports on the implementation of AWPS that contain a complete description of any changes to the master plan since the submission of the previous report and a detailed explanation of the status of the AWPS program in relation to the costs, milestones, and performance data contained in the previous report;

- ensure that these progress reports contain detailed cost, schedule, and performance information to allow Congress to fully assess the status of the Army’s implementation of the workload and performance system and its interface with the Logistics Modernization Program, and the extent to which the system is providing the Army with the capability to match manpower and workload requirements;

- undertake a review of the interface between AWPS and the Logistics Modernization Program, once it has been successfully installed at the Army’s five maintenance depots, to ensure that it is the most efficient and cost-effective use of these two systems; and

- ensure that the data-sharing mechanisms between the Logistics Modernization Program and AWPS are complete and allow for full functionality of AWPS before turning off the information systems that currently support AWPS.

The Department of Defense fully concurred with our findings and recommendations. In response to our recommendation that the Army ensure that future progress reports contain cost, schedule, and performance information as specified in relevant Defense regulations and other congressional guidance, DOD stated that it will implement the recommendation in its February 2003 report.
However, DOD noted that the workload and performance system is not a major automated information system and, therefore, is not required to strictly adhere to the requirements of Department of Defense Regulation 5000.2-R. We agree that the workload and performance system does not meet the minimum threshold to be considered a major system. However, we believe that the parameters outlined in this regulation provide an appropriate management framework for the types of information that should be included in future progress reports. DOD also informally provided other suggested revisions to address certain technical and factual information in the text of the draft report. We reviewed these suggested revisions and made changes where appropriate. We are sending copies of this report to interested congressional committees, the Secretaries of Defense and the Army, and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.

(Notes accompanying the appendix tables indicate, among other things, that the workload and workforce applications have been combined as the strategic planning and forecasting module; that the first phase of the May 2002 plan covers non-Army Materiel Command maintenance activities and the second phase covers expansion into nonmaintenance missions; that no dates were provided for several applications, including Next generation-AWPS and Base operations-Next generation; that the programming-language upgrade is installed at the maintenance depots and will be installed at other sites over the course of the year, with completion scheduled for February 2003 at Tobyhanna and at the other commodity commands at about 3-month intervals; and that, as of October 2001, AWPS was operational at all five maintenance depots, at ammunition logistics and ammunition manufacturing activities (Crane and McAlester), and for base operations at all maintenance depots.)

To determine whether the Army’s May 2002 master plan contains adequate information to assess the Army’s progress in implementing AWPS, we reviewed the Army’s May 2001 and May 2002 master plans. We compared the contents of these plans to the key requirements set forth in section 346 of the National Defense Authorization Act for Fiscal Year 2002. In addition, we reviewed the May 2002 master plan to determine the extent to which it addressed the recommendations outlined in our November 1999 report. We also examined the Department of Defense’s regulation outlining the mandatory procedures for the acquisition of major automated information systems to determine specific criteria required for a progress report. We compared the contents of the May 2002 master plan to the criteria outlined in this regulation. Although this regulation does not specifically apply to the development of the AWPS system, we believe that sound management practices support the need to address these parameters in the Army’s progress reports. We also met with officials at the Headquarters, Department of the Army; Headquarters, Army Materiel Command; and the Operations Support Command in Rock Island, Illinois, to discuss the development and implementation of the AWPS system. In addition, we discussed the benefits and problems that the depots have experienced with AWPS with officials at Tobyhanna Army Depot, Tobyhanna, Pennsylvania; and Corpus Christi Army Depot, Corpus Christi, Texas. We did not, however, independently review the effectiveness of the AWPS system at the depots we visited. Lastly, we relied on prior work done in connection with the implementation of AWPS.
To identify the measures the Army has taken to ensure appropriate coordination and data sharing between AWPS and the Logistics Modernization Program, we reviewed the February 2002 Interface Control Document developed jointly by the Department of the Army and the Computer Sciences Corporation, and discussed the related interface initiatives with appropriate Army and contractor officials. We also reviewed the actions the Army had taken to facilitate the interface and data sharing between the two systems to identify what additional actions were needed before the Army could be assured that the AWPS system would remain fully operational during the transition period. Specifically, we met with officials at the Headquarters, Department of the Army; Headquarters, Army Materiel Command; the Army’s Operations Support Command in Rock Island, Illinois; the Logistics Modernization Project Office in Moorestown, New Jersey; and Tobyhanna Army Depot and Corpus Christi Army Depot. Because the interface between the two systems is still being developed and has not been fully tested, we were unable to assess its effectiveness. We conducted our review between March 2002 and August 2002 in accordance with generally accepted government auditing standards.

At the direction of the House Committee on National Security, the Army began developing the Army Workload and Performance System (AWPS) in 1996. This automated system was intended to address a number of specific weaknesses highlighted in several GAO and Army studies since 1994 regarding the Army’s inability to support its civilian personnel requirements by using an analytically based workload forecasting system. The Army’s May 2002 report on AWPS does not provide Congress with adequate information to assess the Army’s progress in implementing the system.
Specifically, the 2002 plan does not include (1) a detailed summary of all costs that the Army has incurred, or the expenditures that it anticipates in the future, to develop and implement the system; (2) a list of the milestones that the Army has, or has not, achieved in the previous year and a list of milestones that are projected for the future; and (3) an evaluation of how well the system has performed to date in fulfilling its primary function--that is, of matching manpower needs with depot workloads. Although the Army has begun developing an interface between AWPS and the Logistics Modernization Program, it has not sufficiently tested the interface to ensure that data can be shared between the two systems and that the capability of the workload and performance system will not be adversely affected.
Shortcomings in BIA’s management, together with additional factors generally outside of BIA’s management responsibilities, such as a complex regulatory framework, tribes’ limited capital and infrastructure, and varied tribal capacity, have hindered Indian energy development. Specifically, according to some of the literature we reviewed and several stakeholders we interviewed, BIA’s management has three key shortcomings. First, BIA does not have the data it needs to verify ownership of some oil and gas resources, easily identify resources available for lease, or easily identify where leases are in effect. This is inconsistent with Interior’s Secretarial Order 3215, which calls for the agency to maintain a system of records that identifies the location and value of Indian resources and allows resource owners to obtain information regarding their assets in a timely manner. The ability to account for Indian resources would assist BIA in fulfilling its federal trust responsibility, and determining ownership is a necessary step for BIA to approve leases and other energy-related documents. In some cases, however, BIA cannot verify ownership because federal cadastral surveys (the means by which land is defined, divided, traced, and recorded) cannot be found or are outdated. Moreover, BIA does not know the magnitude of its cadastral survey needs or what resources would be needed to address them. We recommended in our June 2015 report that the Secretary of the Interior direct the Director of BIA to take steps to work with BLM to identify cadastral survey needs. In its written comments on our report, Interior did not concur with our recommendation. However, in an August 2015 letter to GAO after the report was issued, Interior stated that it agrees this is an urgent need and reported that it has taken steps to enter into an agreement with BLM to identify survey-related products and services needed to identify and address realty and boundary issues.
In addition, the agency stated in its letter that it will finalize a data collection methodology to assess cadastral survey needs by October 2016. In addition, BIA does not have an inventory of Indian resources in a format that is readily available, such as a geographic information system (GIS). Interior guidance identifies that efficient management of oil and gas resources relies, in part, on GIS mapping technology because it allows managers to easily identify resources available for lease and where leases are in effect. According to a BIA official, without a GIS component, identifying transactions such as leases and ROW agreements for Indian land and resources requires a search of paper records stored in multiple locations, which can take significant time and staff resources. For example, in response to a request from a tribal member with ownership interests in a parcel of land, BIA responded that locating the information on existing leases and ROW agreements would require that the tribal member pay $1,422 to cover approximately 48 hours of staff research time and associated costs. In addition, officials from a few Indian tribes told us that they cannot pursue development opportunities because BIA cannot provide the tribe with data on the location of their oil and gas resources—as called for in Interior’s Secretarial Order 3215. Further, in 2012, a report from the Board of Governors of the Federal Reserve System found that an inventory of Indian resources could provide a road map for expanding development opportunities. Without data to verify ownership and use of resources in a timely manner, the agency cannot ensure that Indian resources are properly accounted for or that Indian tribes and their members are able to take full advantage of development opportunities. 
To improve BIA’s efforts to verify ownership in a timely manner and identify resources available for development, we recommended in our June 2015 report that Interior direct BIA to take steps to complete GIS mapping capabilities. In its written comments in response to our report, Interior stated that the agency is developing and implementing applications that will supplement the data it has and provide GIS mapping capabilities, although it noted that one of these applications, the National Indian Oil and Gas Evaluation Management System (NIOGEMS), is not available nationally. Interior stated in its August 2015 letter to GAO that a national dataset composed of all Indian land tracts and boundaries with visualization functionality is expected to be completed within 4 years, depending on budget and resource availability. Second, BIA’s review and approval is required throughout the development process, including the approval of leases, ROW agreements, and appraisals, but BIA does not have a documented process or the data needed to track its review and response times. In 2014, an interagency steering committee that included Interior identified best practices to modernize federal decision-making processes through improved efficiency and transparency. The committee determined that federal agencies reviewing permits and other applications should collect consistent data, including the date the application was received, the date the application was considered complete by the agency, the issuance date, and the start and end dates for any “pauses” in the review process. The committee concluded that these dates could provide agencies with greater transparency into the process, assist agency efforts to identify process trends and drivers that influence the review process, and inform agency discussions on ways to improve the process. 
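The data elements the committee identified lend themselves to a simple computation of how long an agency actually spends in active review. The sketch below is illustrative only; the dates, function name, and record layout are hypothetical and do not represent an actual BIA or Interior system:

```python
# Illustrative sketch: computing net review time from the data elements the
# interagency committee identified. Dates and field names are hypothetical.
from datetime import date

def active_review_days(received: date, issued: date,
                       pauses: list[tuple[date, date]]) -> int:
    """Days from receipt to issuance, excluding any paused review periods."""
    total = (issued - received).days
    paused = sum((end - start).days for start, end in pauses)
    return total - paused

# Hypothetical lease application: received January 10, issued June 30,
# with one 30-day pause while awaiting additional information.
days = active_review_days(
    received=date(2015, 1, 10),
    issued=date(2015, 6, 30),
    pauses=[(date(2015, 3, 1), date(2015, 3, 31))],
)
print(days)  # 141 days of active review out of 171 elapsed
```

Collected consistently across applications, these few dates would also let an agency aggregate review times to identify trends and the drivers of lengthy reviews, as the committee envisioned.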
However, BIA does not collect the data the interagency steering committee identified as needed to ensure transparency and, therefore, it cannot provide reasonable assurance that its process is efficient. A few stakeholders we interviewed and some literature we reviewed stated that BIA’s review and approval process can be lengthy. For example, stakeholders provided examples of lease and ROW applications that were under review for multiple years. Specifically, in 2014, the Acting Chairman for the Southern Ute Indian Tribe testified before this committee that BIA’s review of some of its energy-related documents took as long as 8 years. In the meantime, the tribe estimates it lost more than $95 million in revenues it could have earned from tribal permitting fees, oil and gas severance taxes, and royalties. According to a few stakeholders and some literature we reviewed, the lengthy review process can increase development costs and project times and, in some cases, result in missed development opportunities and lost revenue. Without a documented process or the data needed to track its review and response times, BIA cannot ensure transparency in the process, cannot ensure that documents are moving forward in a timely manner, and cannot determine the effectiveness of efforts to improve the process. To address this shortcoming, we recommended in our June 2015 report that Interior direct BIA to develop a documented process to track its review and response times and enhance data collection efforts to ensure that the agency has the data needed to track those times. In its written comments, Interior did not fully concur with this recommendation. Specifically, Interior stated that it will use NIOGEMS to assist in tracking review and response times. However, this application does not track all realty transactions or processes and has not been deployed nationally.
Therefore, while NIOGEMS may provide some assistance to the agency, it alone cannot ensure that BIA’s process to review energy-related documents is transparent or that documents are moving forward in a timely manner. In its August 2015 letter to GAO, Interior stated that it will try to implement a tracking and monitoring effort by the end of fiscal year 2017 for oil and gas leases on Indian lands. The agency did not indicate whether it intends to improve the transparency of its review and approval process for other energy-related documents, such as ROW agreements and surface leases, some of which were under review for multiple years. Third, some BIA regional and agency offices lack adequate staff resources or staff with the skills needed to effectively evaluate energy-related documents, according to a few stakeholders we interviewed and some of the literature we reviewed. For instance, Interior officials told us that the number of BIA personnel trained in oil and gas development is not sufficient to meet the demands of increased development. In another example, a BIA official from an agency office told us that leases and other permits cannot be reviewed in a timely manner because the office does not have enough staff to conduct the reviews. We are conducting ongoing work for this committee that will include information on key skills and staff resources at BIA involved with the development of energy resources on Indian lands.
According to stakeholders we interviewed and literature we reviewed, additional factors, generally outside of BIA’s management responsibilities, have also hindered Indian energy development, including

- a complex regulatory framework consisting of multiple jurisdictions that can involve significantly more steps than the development of private and state resources, increase development costs, and add to the timeline for development;
- fractionated, or highly divided, land and mineral ownership interests;
- tribes’ limited access to initial capital to start projects and limited opportunities to take advantage of federal tax credits;
- dual taxation of resources by states and tribes that does not occur on private, state, or federally owned resources;
- perceived or real concerns about the political stability and capacity of some tribal governments; and
- limited access to infrastructure, such as transmission lines needed to carry power generated from renewable sources to market and transportation linkages to transport oil and gas resources to processing facilities.

A variety of factors have deterred tribes from pursuing TERAs. Uncertainty associated with Interior’s TERA regulations is one factor. For example, TERA regulations authorize tribes to assume responsibility for energy development activities that are not “inherently federal functions,” but Interior officials told us that the agency has not determined what activities would be considered inherently federal because doing so could have far-reaching implications throughout the federal government. According to officials from one tribe we interviewed, the tribe has repeatedly asked Interior for additional guidance on the activities that would be considered inherently federal functions under the regulations.
According to the tribal officials, without additional guidance on inherently federal functions, tribes considering a TERA do not know what activities the tribe would be assuming or what efforts may be necessary to build the capacity needed to assume those activities. We recommended in our June 2015 report that Interior provide additional energy development-specific guidance on provisions of TERA regulations that tribes have identified as unclear. Additional guidance could include examples of activities that are not inherently federal in the energy development context, which could assist tribes in identifying capacity building efforts that may be needed. Interior agreed with the recommendation and stated it is considering further guidance but did not provide additional details regarding issuance of the guidance. In addition, the costs associated with assuming activities currently conducted by federal agencies and a complex application process were identified by literature we reviewed and stakeholders we interviewed as other factors that have deterred any tribe from entering into a TERA with Interior. Specifically, through a TERA, a tribe assuming control for energy development activities that are currently conducted by federal agencies does not receive federal funding for taking over the activities from the federal government. Several tribal officials we interviewed told us that the tribe does not have the resources to assume additional responsibility and liability from the federal government without some associated support from the federal government. In conclusion, our review identified a number of areas in which BIA could improve its management of Indian energy resources and enhance opportunities for greater tribal control and decision-making authority over the development of their energy resources. 
Interior stated it intends to take some steps to implement our recommendations, but we believe Interior needs to take additional actions to address data limitations and track its review process. We look forward to continuing to work with this committee in overseeing BIA and other federal programs to ensure that they are operating in the most effective and efficient manner. Chairman Barrasso, Ranking Member Tester, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Christine Kehr (Assistant Director), Alison O’Neill, Jay Spaan, and Barbara Timmerman made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Indian energy resources hold significant potential for development, but according to a 2014 Interior document, these resources are underdeveloped relative to surrounding non-Indian resources. Development of Indian energy resources is a complex process that may involve federal, tribal, and state agencies. Interior's BIA has primary authority for managing Indian energy development and generally holds final decision-making authority for leases and other permits required for development.
The Energy Policy Act of 2005 provided the opportunity for interested tribes to pursue TERAs—agreements between a tribe and Interior that allow the tribe to enter into energy leases and agreements without review and approval by Interior. However, no tribe has entered into a TERA. This testimony highlights the key findings of GAO's June 2015 report (GAO-15-502). It focuses on factors that have (1) hindered Indian energy development and (2) deterred tribes from pursuing TERAs. For the June 2015 report, GAO analyzed federal data; reviewed federal, academic, and other literature; and interviewed tribal, federal and industry stakeholders. In its June 2015 report, GAO found that Bureau of Indian Affairs' (BIA) management shortcomings and other factors—such as a complex regulatory framework, limited capital and infrastructure, and varied tribal capacity—have hindered Indian energy development. Specifically, BIA's management has the following shortcomings: BIA does not have the data it needs to verify ownership of some Indian oil and gas resources, easily identify resources available for lease, or identify where leases are in effect, as called for in Secretarial Order 3215. GAO recommended that Interior direct BIA to identify land survey needs and enhance mapping capabilities. In response, Interior stated it will develop a data collection tool to identify the extent of the survey needs in fiscal year 2016, and enhance mapping capabilities by developing a national dataset composed of all Indian land tracts and boundaries in the next 4 years. BIA's review and approval is required throughout the development process, but BIA does not have a documented process or the data needed to track its review and response times, as called for in implementation guidance for Executive Order 13604. According to a tribal official, BIA's review of some of its energy-related documents, which can include leases, right-of-way agreements, and appraisals, took as long as 8 years. 
In the meantime, the tribe estimates it lost more than $95 million in revenues it could have earned from tribal permitting fees, oil and gas severance taxes, and royalties. GAO recommended that Interior direct BIA to develop a documented process to track its review and response times. In response, Interior stated it will try to implement a tracking and monitoring mechanism by the end of fiscal year 2017 for oil and gas leases. However, it did not indicate whether it intends to track and monitor the review of other energy-related documents that must be approved before development can occur. Without comprehensive tracking and monitoring of its review process, it cannot ensure that documents are moving forward in a timely manner, and lengthy review times may continue to contribute to lost revenue and missed development opportunities. Some BIA regional and agency offices do not have staff with the skills needed to effectively evaluate energy-related documents or adequate staff resources. GAO is conducting ongoing work on this issue. GAO also found in its June 2015 report that a variety of factors have deterred tribes from seeking tribal energy resource agreements (TERA). These factors include uncertainty about some TERA regulations, costs associated with assuming activities historically conducted by federal agencies, and a complex application process. For instance, one tribe asked Interior for additional guidance on the activities that would be considered inherently federal functions—a provision included in Interior's regulations implementing TERA. Interior did not provide the clarification requested. Therefore, the tribe had no way of knowing what efforts may be necessary to build the capacity needed to assume those activities. GAO recommended that Interior provide clarifying guidance. In response, Interior officials stated that the agency is considering further guidance, but it did not provide a timeframe for issuance. 
In its June 2015 report, GAO recommended that Interior take steps to address data limitations, track its review process, and provide clarifying guidance. In an August 2015 letter to GAO after the issuance of the report, Interior generally agreed with the recommendations and identified some steps it intends to take to implement them.
The United States prefers to conduct operations as part of a coalition when possible. In prosecuting the Global War on Terrorism, the United States, through the U.S. Central Command (CENTCOM), has acted in concert with a number of other countries as part of a coalition to conduct Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq. Most of these countries have sent officers to CENTCOM headquarters—located at MacDill Air Force Base in Tampa, Florida—to act as liaisons between their countries and CENTCOM commanders and assist in planning and other operational tasks. As coalition liaison officers began arriving to assist in Operation Enduring Freedom, CENTCOM officials established a secure area with trailers outfitted as offices for the officers to use. As the coalition expanded and Operation Iraqi Freedom started, the number of liaison officers grew, as did the need for more trailers and administrative support. CENTCOM officials initially paid for the support from Combatant Commander’s Initiative Funds earmarked for short-term initiatives identified by the commander. However, as the coalitions for both operations grew and were expected to continue into fiscal year 2003, CENTCOM requested that Congress allow the command to use funds from its budget to pay for the support provided to the liaison officers. Congress responded in the fiscal year 2003 National Defense Authorization Act by authorizing the Secretary of Defense to provide administrative services and support to those liaison officers of countries involved in a coalition with the United States and to pay the travel, subsistence, and personal expenses of those liaison officers from developing countries. This legislation expires September 30, 2005. The legislation does not direct us to assess whether it should be renewed, and we did not do so.
Although it is the responsibility of the Secretary of Defense to formulate general defense policy and policy related to all matters of direct and primary concern to DOD, we could find no evidence of guidance issued by DOD to combatant commanders on how to implement the legislation allowing DOD to provide support to coalition liaison officers. Also, we could not identify any office within DOD that has responsibility for implementing the legislation and, therefore, may have promulgated guidance on the legislation. Guidance for issues that affect all the components originates at the DOD level. Typically, DOD will issue a directive—a broad policy document containing what is required to initiate, govern, or regulate actions or conduct by DOD components. This directive establishes a baseline policy that applies across the combatant commands, services, and DOD agencies. DOD may also issue an instruction, which implements the policy or prescribes the manner or a specific plan or action for carrying out the policy, operating a program or activity, and assigning responsibilities. In our opinion, this guidance is important for consistent implementation of a program across DOD. To determine what guidance has been provided to the commands, we contacted offices within DOD, the Office of the Secretary of Defense, and the Joint Staff to determine which office has responsibility for implementing this legislation. After calls to the Offices of Legislative Affairs and Comptroller within the Office of the Secretary of Defense, as well as the Joint Staff’s Plans and Policy Directorate and Comptroller, neither we nor the DOD Inspector General, our focal point within DOD, were able to locate any office having this responsibility. In the data collection instrument we sent to the combatant commands, we asked whether the commands had received any guidance on how to implement the legislation. All commands replied that they had received no guidance from any office within DOD. 
Although the legislation was inspired by the needs of the coalition assembled for the Global War on Terrorism, its authority is available through the Secretary of Defense to all combatant commanders. However, according to the results of our research, the combatant commands’ awareness of and need to use the legislation vary widely. To determine the extent to which the combatant commands are aware of and using this legislation, we created a data collection instrument and e-mailed it to representatives at each combatant command. In responding to this instrument, representatives from Northern Command, Southern Command, European Command, Transportation Command, and Strategic Command stated that they were neither aware of nor had a need to use the legislation, while representatives of Joint Forces Command, Special Operations Command, and Pacific Command were aware of, but had no need to use, the legislation. CENTCOM and one of its subordinate commands were the only commands both aware of and using the legislation. CENTCOM is providing administrative services and support to more than 300 foreign coalition liaison officers from over 60 countries fighting the Global War on Terrorism with the United States. In addition, CENTCOM is paying travel, subsistence, and personal expenses to over 70 liaison officers from more than 30 developing countries who are included in the larger number. In the absence of guidance from the Office of the Secretary of Defense or the Joint Staff, CENTCOM officials established internal operating procedures to provide the administrative and travel-related support that the foreign coalition liaison officers needed. These procedures are not written, but they are based on existing criteria defining developing countries, federal regulations governing travel, economies of scale, and what appears to be prudent fiscal management.
In providing administrative services and support, CENTCOM officials determined that each country’s delegation (limited to no more than five foreign coalition liaison officers) would be provided a trailer for office space with furniture, telephone, computer, printer, copier, and shredder. Some of the smaller delegations share office space. CENTCOM pays for the furniture, shredders, copiers, telephones, and part of the custodial expense. MacDill Air Force Base, which is host to CENTCOM, pays for trailer leases, utilities, external security, and part of the custodial expense. These trailers are located on MacDill property in a fenced compound with security guards on duty. We toured some of the trailers and determined that CENTCOM was providing the space and equipment typical of a small office for the coalition officers. However, CENTCOM officials told us that some countries have spent their own funds to upgrade the office space provided. In determining how to pay the travel, subsistence, and personal expenses for coalition liaison officers from developing countries, CENTCOM officials told us they used existing criteria and federal regulations to guide their decisions. Absent a DOD or Department of State list of what would be considered developing countries, CENTCOM officials told us they use a list of countries generated by the Organization for Economic Cooperation and Development, an international organization to which the United States belongs, and defined by that organization as “Least Developed: Other Low Income and Lower Middle Income.” According to the officials, this list is recognized by the Joint Staff. To determine the appropriate amounts to provide for travel, subsistence, and personal expenses, CENTCOM officials use the Joint Federal Travel Regulations. CENTCOM officials established some basic standards for authorizing travel, subsistence, and personal expenses for the coalition liaison officers from developing countries.
CENTCOM pays for one round-trip airplane ticket from an officer’s country of origin to Tampa, Florida, where CENTCOM is headquartered, and return during a tour of duty. Other trips home are at an officer’s or his or her country’s expense. Meals and incidental expenses are based on the Joint Federal Travel Regulations’ rate for Tampa ($42 per day in fiscal year 2003) paid monthly based on the number of days the officer actually spends in Tampa. CENTCOM provides housing for foreign coalition liaison officers through contracts it has negotiated with gated apartment complexes offering on-site security. Because of the number of officers needing housing (including those officers not from developing countries, who pay for their own housing), CENTCOM officials told us that they were able to negotiate rates for housing between $58 and $65 per day, which are less than Joint Federal Travel Regulations’ per diem rate for the Tampa area ($93 per day in fiscal year 2003). CENTCOM does not pay any expenses incurred for family members of the coalition liaison officer who might accompany the officer to the United States. In fiscal year 2002, the first year the coalition was formed, coalition liaison officers had to find their own housing, which was more expensive than the contracts currently in place. CENTCOM officials also told us that they rent cars for the coalition liaison officers from the General Services Administration at a cost of $350 per car per month, which is less expensive than renting from a commercial car leasing company at a cost of $750 per month. Again, because there are so many officers who require transportation, CENTCOM was able to negotiate a lower rate. Officers are allowed one car for each three members of a delegation. The officer whose name is on the car rental agreement is allowed $60 per month for gas. The officers assigned to the car must pay for any additional gas. 
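The rate comparisons above can be checked with simple arithmetic. The following sketch is illustrative only; the fiscal year 2003 dollar figures come from this report, but the script itself is ours:

```python
# Fiscal year 2003 figures reported for the Tampa area.
per_diem_rate = 93             # Joint Federal Travel Regulations per diem, dollars per day
negotiated_housing = (58, 65)  # CENTCOM's negotiated apartment rates, dollars per day

# Daily housing savings per officer at the negotiated rates.
housing_savings = [per_diem_rate - rate for rate in negotiated_housing]
print(housing_savings)  # [35, 28] -> $28 to $35 saved per officer per day

# Monthly car costs: General Services Administration rental versus commercial leasing.
gsa_car, commercial_car = 350, 750  # dollars per car per month
car_savings = commercial_car - gsa_car
print(car_savings)  # 400 -> $400 saved per car per month
```

These margins are consistent with the command's stated practice of negotiating volume rates below the standard travel-regulation ceilings.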
CENTCOM and MacDill Air Force Base spent a total of almost $30 million between fiscal years 2002 and 2003 to support coalition liaison officers (see table 1). In fiscal year 2002, CENTCOM and MacDill Air Force Base spent $12.4 million to provide the administrative services and support and pay travel, subsistence, and personal expenses for the coalition liaison officers assigned to CENTCOM headquarters. The money came from Combatant Commander’s Initiative Funds and MacDill Air Force Base funds. The amount spent in fiscal year 2003—nearly $17.1 million—included $898,000 in Commander’s Initiative Funds to pay for travel, subsistence, and personal expenses, which was used until the legislation to provide support to coalition liaison officers was passed and the funds became available. The remaining amount came from CENTCOM and MacDill funds. In addition to CENTCOM, the Coalition Joint Task Force-Horn of Africa, a CENTCOM subordinate operating command, reported spending over $300,000 to provide administrative support and pay travel, subsistence, and personal expenses to 13 liaison officers assigned to the task force headquarters. No other subordinate operating command or component command reported spending funds to support coalition liaison officers. CENTCOM officials stated that this legislation has benefited the coalition by providing maximum communication and coordination for the deployment of those forces committed to fighting the Global War on Terrorism. They also stated that without the presence of the liaison officers at CENTCOM, they could not accomplish the coalition integration planning and coordination important to the Global War on Terrorism as effectively or efficiently as they are doing.
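The fiscal year totals above can be cross-checked by simple addition; the figures are taken from this report, and the script is an illustrative sketch:

```python
# Reported spending to support coalition liaison officers, in millions of dollars.
fy2002_spending = 12.4  # Combatant Commander's Initiative Funds and MacDill Air Force Base funds
fy2003_spending = 17.1  # CENTCOM and MacDill funds, including $0.898 million in Initiative Funds

total = fy2002_spending + fy2003_spending
print(f"${total:.1f} million")  # $29.5 million, consistent with "almost $30 million"
```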
CENTCOM officials stated that the legislation’s authority to pay for travel, subsistence, and personal expenses for developing countries’ liaison officers also has given the command a tool to use in negotiating with developing countries for their participation in the coalition force. DOD-wide guidance provides uniform direction throughout the department on how to implement programs and policies. While CENTCOM has developed procedures for managing support to coalition liaison officers and has taken steps to provide the support authorized by the legislation in the least costly way, in the absence of DOD-wide guidance, there can be no assurance that prudent procedures will always be followed. Moreover, without DOD guidance, should other commands choose to use the authority granted by this legislation, there is no assurance that they will implement it in a uniform and prudent manner. As of January 2004, there was no DOD office responsible for the implementation of the legislative authority allowing commands to pay for support for coalition liaison officers and no DOD-wide guidance on its use. We recommend that the Secretary of Defense take the following two actions: (1) designate an office within DOD to take responsibility for this legislation and (2) direct this designated office to promulgate and issue guidance to the combatant commands and their component and subordinate commands on how to implement this legislation. In official oral comments on a draft of this report, DOD concurred with the report. DOD stated that it would designate the Joint Staff as the office responsible for implementing the legislation and issuing appropriate guidance. We are sending copies of this report to interested congressional committees; the Secretary of Defense; and the Director, Office of Management and Budget. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions, please contact me at (757) 552-8100 or by e-mail at curtinn@gao.gov. Major contributors to this report were Steven Sternlieb, Ann Borseth, Madelon Savaides, David Mayfield, and Renee McElveen.

In the National Defense Authorization Act for Fiscal Year 2003, Congress authorized the Secretary of Defense to provide administrative services and support to foreign coalition liaison officers temporarily assigned to the headquarters of a combatant command or any of its subordinate commands. Congress required GAO to assess the implementation of this legislation. Specifically, GAO's objectives were to determine (1) what guidance the Department of Defense (DOD) has provided on the implementation of this legislation, (2) the extent to which the commands are aware of and are using this legislation, and (3) the level of support being provided by commands using this legislation and the benefits derived from it. GAO could find no evidence that DOD had issued any guidance to combatant commanders on how to implement this legislation. In addition, GAO was unable to identify an office within DOD that has responsibility for implementing this legislation. The DOD Office of the Inspector General, as GAO's focal point within DOD, was also unable to identify a responsible office. Although the legislation was inspired by the needs of the coalition assembled for the Global War on Terrorism, its authority is available through the Secretary of Defense to all combatant commanders. According to the results of GAO's research, the combatant commands' awareness of and need to use the legislation varied widely, with Central Command being the only command using the authority to support liaison officers. Central Command spent $17 million in fiscal year 2003 to provide administrative services and support to more than 300 coalition liaison officers from over 60 countries.
As allowed by the legislation, the command also paid the travel, subsistence, and personal expenses of over 70 of these officers from more than 30 developing countries. Central Command officials stated that they could not accomplish the coalition integration planning and coordination important to the Global War on Terrorism as effectively or efficiently as they are doing without the liaison officers. They also commented that the legislation helps facilitate the participation of a developing country in the coalition if the command can pay for travel and subsistence.
To provide an update on the general status of the scientific research on mobile phone health effects, we reviewed reports by organizations such as FDA, the National Institutes of Health (NIH), the World Health Organization, and by expert panels convened by the governments of the United Kingdom, Canada, and Australia that have reviewed and assessed the peer-reviewed literature on the subject. We also interviewed representatives of these organizations, as well as other scientists prominent in the field of radiofrequency energy health effects in government, industry, and academia. To determine the federal government’s role in sponsoring, conducting, or overseeing research on mobile phone health effects, we gathered information from federal agencies—including the Air Force, Army, EPA, the Federal Communications Commission (FCC), FDA, National Cancer Institute, National Institute of Environmental Health Sciences, National Institute for Occupational Safety and Health, National Institute of Standards and Technology, National Science Foundation, Navy, and Occupational Safety and Health Administration—on their activities, if any, in this area. To describe and assess the cooperative research agreement between FDA and CTIA, we conducted interviews with officials of these two organizations. We also reviewed and analyzed documents related to the agreement, including the agreement itself, FDA’s working group meetings, and CTIA’s request for research proposals. We also discussed the cooperative research agreement with parties outside of FDA and CTIA, including officials at several federal agencies and individual mobile phone manufacturers, as well as independent research scientists and public interest groups. This research agreement follows up on an earlier industry-funded, 5-year research effort run by Wireless Technology Research (WTR).
We spoke to the former chairman of WTR, as well as several members of its peer review board and some of the research scientists that WTR funded. We also reviewed documentation related to WTR, including reports that it published, as well as correspondence between WTR and federal agencies, its peer review board, and other parties. To evaluate issues related to standard setting, testing, and public information, we reviewed federal laws and regulations related to radiofrequency energy and safety standards for mobile phones. We also met with officials at FCC, FDA, EPA, and other agencies to discuss their regulatory roles and activities, and with industry representatives to discuss their views and activities. To gain greater context on all of the objectives, we also interviewed representatives of nonindustry, nongovernment organizations with an interest in mobile phone safety, including consumer groups, advocates, and labor unions. Our review focused on health issues related to the radiofrequency energy emitted from handheld mobile phones. It did not include issues related to emissions from network base stations, the potential effects of mobile phone emissions on medical devices, or on safety issues related to using a mobile phone while driving. We performed our review from July 2000 through April 2001 in accordance with generally accepted government auditing standards. The United States has experienced a dramatic growth in the number of wireless telephone subscribers since nationwide cellular service became available in the mid-1980s. In 1994, 16 million Americans were subscribers. By 2001, subscribership had reached an estimated 110 million (see fig. 1) and is projected to have strong growth for the foreseeable future. Growth has been strong in other countries as well, with some experts projecting that worldwide subscribership will reach about 1.2 billion by 2005. 
In countries such as Austria, Finland, Italy, Norway, South Korea, and Sweden, more than half the population are already subscribers. The pocket-sized mobile phone in common use today is a low-powered radio transceiver (a combination transmitter and receiver) that uses radio waves to communicate with fixed installations, called base stations or cell towers. The base stations are networked to a central switching station that directs a mobile phone call to the desired location, whether that is another mobile phone or a traditional landline phone. The radio waves used by mobile phones are a form of electromagnetic radiation—a series of waves of electric and magnetic energy that move together through space. The spectrum of electromagnetic radiation comprises a range of frequencies from very-low-frequency energy (such as electrical power), through visible light, to extremely high-frequency radiation (such as X-rays and gamma rays), as shown in figure 2. The portion of the electromagnetic spectrum used by mobile phones—as well as other telecommunications services, such as radio and television broadcasting—is generally referred to as the “radiofrequency spectrum.” Frequencies in this part of the spectrum are also used for some noncommunications applications, such as microwave ovens and radar. As figure 2 shows, the electromagnetic spectrum includes ionizing and non-ionizing radiation. Ionizing radiation, such as X-rays and gamma rays, has energy levels high enough to strip electrons from atoms and molecules. Exposure to ionizing radiation can cause serious biological damage, including the production of cancers. Radiofrequencies, on the other hand, are in the “non-ionizing” portion of the electromagnetic spectrum, which lacks the energy needed to cause ionization. However, radiofrequency energy can produce other types of biological effects.
For example, it has been known for many years that exposure to high levels of radiofrequency energy, particularly at microwave frequencies, can rapidly heat biological tissue. This heating (“thermal” effect) can cause harm by increasing body temperature, disrupting behavior, and damaging biological tissue. The heating effect can also be usefully harnessed for household and industrial applications, such as cooking food and molding plastics. Mobile phones are designed to operate at power levels well below the threshold for known thermal effects. A mobile phone is designed to operate at a maximum power level of 0.6 watts—less than the amount of power needed to light a flashlight bulb—and generally uses less than maximum power when operating close to a base station. By contrast, household microwave ovens use between 600 and 1,100 watts of power. Nonetheless, the user’s head absorbs some radiofrequency energy when the phone is held to the ear during a call. The mobile phone health issue came to national attention in 1993 after a lawsuit was brought against some mobile phone companies by a Florida man claiming that his wife’s use of a mobile phone caused her brain cancer. The industry has prevailed in this and other suits that have been brought. Recently, a number of new lawsuits have been filed. Several expert bodies have reviewed the scientific literature on the issue, including the World Health Organization, “Electromagnetic Fields and Public Health: Mobile Telephones and Their Base Stations,” Fact Sheet No. 193 (2000); the Independent Expert Group on Mobile Phones, “Mobile Phones and Health,” National Radiological Protection Board (UK) (Apr. 1999); and the Royal Society of Canada, “A Review of the Potential Health Risks of Radiofrequency Fields From Wireless Telecommunication Devices,” expert panel report prepared for Health Canada (1999). These reviews generally found that, while the research to date does not demonstrate that mobile phone use has an adverse effect on human health, some studies that have suggested the existence of biological effects require further investigation.
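The power comparison earlier in this section (a 0.6-watt maximum for a mobile phone versus 600 to 1,100 watts for a household microwave oven) can be made concrete with a quick ratio; the wattage figures come from this report, and the arithmetic is only an illustration:

```python
phone_max_watts = 0.6          # maximum mobile phone transmit power
microwave_watts = (600, 1100)  # typical household microwave oven power range

# A microwave oven operates at roughly 1,000 to 1,800 times the phone's maximum power.
ratios = [round(w / phone_max_watts) for w in microwave_watts]
print(ratios)  # [1000, 1833]
```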
A number of factors make it difficult to draw definitive conclusions from the existing research about the potential health effects of mobile phones. A relatively large body of research exists on the health effects of radiofrequency energy in general, but most of this research has focused on short-term exposure of the entire body, not on the longer-term exposure of the head that is characteristic of mobile phone use. In addition, much of the research to date has investigated the health effects of emissions at frequencies different from those used by mobile phones; it is not clear how possible health effects found at one frequency on the radiofrequency spectrum apply to other frequencies on the spectrum. Furthermore, much of the research focusing on mobile phones has tested the emissions of analog phones rather than of digital phones, which are rapidly becoming the standard technology. A few researchers have hypothesized that digital phones, which transmit messages as discontinuous pulses, could have different biological effects from analog phones, which transmit messages using a continuously varying radio wave. However, according to FDA, at this point the available scientific literature does not demonstrate convincingly that the biological effects of radiofrequency exposure differ based on specific frequency, or on whether the signal is analog or digital. Two major categories of studies are used by scientists to assess whether mobile phones present a health risk: epidemiological studies and laboratory studies. Epidemiological studies, sometimes called human health studies, investigate the associations between health effects and the characteristics of people and their environment. Laboratory studies, which can include studies on animals, biological tissue samples, isolated cells, or human volunteers, are used to try to determine a causal relationship between a risk factor and human health, and the mechanism through which that relationship occurs.
In 1996, the World Health Organization, an agency of the United Nations, established the International Electromagnetic Fields Project, which seeks to assess the health and environmental effects of exposure to electric and magnetic fields, including radiofrequency fields emitted by mobile phones. The agency notes that because the number of people using mobile phones has grown so large, even small adverse effects on health could have major public health implications. The goals of the project include coordinating international research efforts in the area, assessing the scientific literature, and identifying gaps in knowledge needing further research. In 1998, the project developed an agenda for research priorities on the health effects of electromagnetic fields. This agenda was developed in collaboration with a number of international organizations, such as the United Nations Environment Programme and the European Commission, as well as independent scientific institutions in several countries. FDA officials told us that they participated heavily in the development of this research agenda and that they concur with it. Among the research priorities identified were (1) additional large-scale animal studies that test the effect of long-term exposure to radiofrequency energy; (2) studies that test health effects other than cancer, such as memory loss and effects on the eye or inner ear; and (3) at least two additional large-scale epidemiological studies of people exposed to radiofrequency energy, including mobile phone users. Officials at the World Health Organization and FDA told us that most of these research needs are being addressed by ongoing or planned studies in countries around the world. Because of the nature of many of these studies, however, it may be several years before results are reported. Highlights of efforts currently under way or planned include the following. 
The International Agency for Research on Cancer, a part of the World Health Organization, is coordinating a series of large epidemiological studies looking at whether there is an association between mobile phone use and brain cancer. At least 13 countries are participating in the studies, with results expected in 2004. The European Commission, under its research program known as the Fifth Framework Programme, is sponsoring a number of studies on the health effects of mobile phone emissions that are being funded primarily by the European Commission and the mobile phone industry. The planned research includes large-scale animal studies designed to follow up on prior research. FDA and CTIA have begun a cooperative research effort, discussed below, that initially is focusing on two areas: (1) following up on the previously cited micronucleus assay that found changes in the genetic material of blood cells exposed to radiofrequency energy and (2) epidemiological studies. The National Toxicology Program, an interagency program headquartered at NIH, began planning in 2000 a series of long-term animal studies looking at the effect of long-term exposure to the radiofrequency emissions of mobile phones. Officials at the program are determining how their efforts should be coordinated with the European Commission’s planned animal studies. The United Kingdom’s Department of Health announced in December 2000 a research program of up to $10 million on the possible health effects of mobile phone emissions. While the specific areas of research to be conducted are still under review, one strong area of focus is expected to be noncancer effects, such as effects on brain function. In addition to these efforts, there are various other government-supported national research programs on mobile phone health issues, including programs in Australia, Finland, France, Germany, Italy, Japan, and Sweden. 
Most of these programs are being coordinated with, or are being conducted in collaboration with, the programs of the World Health Organization and/or the European Commission. Many of the initiatives in mobile phone research are funded through a combination of government and industry money. For example, mobile phone research being done under the Fifth Framework Programme is being financed 40 percent by the European Commission and 60 percent by the mobile phone industry. Similarly, the United Kingdom's effort is being financed half by the government and half by the industry. Much of the industry funding is done through the GSM Association, which represents the wireless communications industry, and the Mobile Manufacturers Forum, an international consortium of mobile phone manufacturers that funds and coordinates research efforts on the public health effects of mobile phones and base stations. In addition, some individual mobile phone manufacturing companies conduct their own internal research programs. For example, Motorola has an in-house staff of five scientists and engineers that researches radiofrequency exposure issues as they relate to public health. Motorola also spends about $1 million a year on contracted biological research related to radiofrequency energy. The U.S. government supports some research on the health effects of mobile phone radiofrequency emissions; overall, this represents a small portion of the research being done in the area worldwide. At present, only one agency, NIH, is providing significant funding for research related directly to the health effects of mobile phone emissions. Other agencies, such as FDA, are providing technical and scientific support to research efforts funded by the mobile phone industry, international organizations, and others. In addition to its cooperative research and development agreement with CTIA, FDA is also an active participant in the World Health Organization effort.
For example, an FDA official is serving as an external scientific adviser to the mobile phone research activities being conducted under the European Commission's Fifth Framework Programme. Depending on what tests it chooses to conduct, NIH's National Toxicology Program may spend as much as $10 million over several years on its long-term animal tests of mobile phone radiofrequency exposure. The National Toxicology Program is an interagency program headquartered at NIH's National Institute of Environmental Health Sciences that routinely solicits nominations for toxicological studies. FDA nominated the review of mobile phone radiofrequency exposure and is providing some input to NIH on the experimental design of the animal studies. In addition, the Department of Commerce's National Institute of Standards and Technology is providing some assistance to NIH on the design and measurement of the radiofrequency exposure systems to be used in the program's animal tests. The Department of Defense has one of the world's largest research programs on the health effects of radiofrequency energy, with approximately 50 to 60 full-time staff working on the issue in Air Force, Army, and Navy programs. Because the bulk of this research focuses on radar and on microwave-emitting weapon systems, it is not specifically related to mobile phones, but it does add to the general body of knowledge about the subject of radiofrequency health effects. One study being conducted by the Air Force, however, is closely related to mobile phone health effects—a $200,000 study on whether the low-intensity radiofrequency emissions characteristic of some mobile phones have an effect on the protective barrier that prevents the brain from being harmed by certain substances in the blood. EPA does not currently sponsor or conduct any research related to mobile phone health effects.
EPA used to have a substantial in-house program of research on radiofrequency energy, but it was largely eliminated in the 1980s for budgetary reasons. However, EPA scientists with expertise in the area play an active advisory role with regard to research conducted by other federal agencies, foreign governments, and private researchers, and with regard to regulatory actions by FCC. In 1993, CTIA created a nonprofit organization to fund research on the health effects of mobile phone emissions. Although some useful research was conducted, questions have been raised about the productivity and accountability of that organization. A new industry-funded research initiative began in June 2000 that is largely focused on following up on the results of two studies under this previous effort. Unlike the prior effort, this new one involves direct participation and oversight by FDA. Responding to public concern that mobile phones may cause health problems such as brain cancer, CTIA, a trade association representing wireless telecommunications manufacturers and service providers, met in the early 1990s with FDA officials to discuss a possible research effort related to mobile phone health effects. FDA proposed that the two organizations engage in a cooperative research effort, but CTIA declined primarily because, they told us, they feared that government involvement would add bureaucratic complexity that would slow down the effort. Instead, on its own, CTIA established the Scientific Advisory Group on Cellular Telephone Research, whose goal was to develop, fund, and manage a research program assessing whether mobile phones pose a public health risk and, if so, what should be done to mitigate that risk. CTIA committed $25 million over 5 years to the group. 
Using input from outside scientists, the Scientific Advisory Group developed a research agenda that included multidisciplinary studies involving epidemiology, cell cultures, test animals, and dosimetry (the measurement of radiation). The group’s activities were reviewed by the Peer Review Board on Cellular Telephones, a board of outside scientists coordinated through Harvard University’s Center for Risk Analysis. In our 1994 report on mobile phone safety, we noted that the Scientific Advisory Group was being directly funded by CTIA on a month-by-month basis, an arrangement that could have raised questions about the objectivity and credibility of the research effort. In 1995, the Scientific Advisory Group was transformed into Wireless Technology Research, L.L.C. (WTR), a nonprofit organization financed by, but autonomous from, CTIA. WTR’s structure was designed to maintain independence from industry control. However, several representatives of federal agencies and industry, as well as members of WTR’s Peer Review Board, told us they believe that the structure set up for WTR resulted in too little accountability. WTR had a three-person board of directors, but the chairman of this board also served as the day-to-day manager of WTR’s activities and did not report directly to CTIA or to any other body. Our 1994 report recommended that FDA and EPA, in coordination with FCC, work with the Scientific Advisory Group to maximize the usefulness, independence, and objectivity of the group’s research effort. However, in the end, no federal agency had a role overseeing WTR’s research activities. FDA officials told us that they did not take an oversight role in WTR because it was a private organization not under FDA’s control and that, in any case, WTR rarely solicited input from FDA and did not always follow the input that was given. WTR spent about $28 million over 5 years, including about $25 million for research on the health effects of mobile phone emissions. 
A broad array of scientists and government and industry officials we spoke with said that some of the research sponsored by WTR was useful. However, they questioned both the type of projects WTR selected and the amount of research that was produced, given the financial resources it had available. In addition, WTR’s Peer Review Board raised concerns about WTR’s management in a July 1997 letter to the chairman of WTR. Among other issues, the board expressed concern that WTR was not always open and transparent, particularly with regard to its finances, and that decisions about the direction of its research agenda did not always follow the advice of outside experts. The chairman of WTR told us that in retrospect WTR should have been more transparent about its work and its finances. However, he said that WTR’s research agenda incorporated the input of a wide number of outside experts. He also said that WTR’s mission was broader in scope than just sponsoring research; it included tracking the emerging scientific information on the topic and identifying strategies for mitigating any public risk. The WTR effort eventually became caught up in public controversy. In May 1999, near the end of WTR’s funding period, the chairman of WTR issued a statement that while the results of WTR research did not show a need for public health intervention, the preliminary findings of two studies raised concerns that warranted follow-up research. The chairman stated that one study (see fn. 14) had found that human blood cells exposed to mobile phone frequency radiation showed genetic damage in the form of micronuclei, which is often considered a precursor to cancer. The second study (see fn. 11) was an epidemiological study that, according to the chairman, found a statistically significant risk of a certain rare type of tumor. 
However, the findings of this study were preliminary, the analysis of the data had not yet been completed, and the study had not yet been fully peer-reviewed or published. In addition, the principal researcher of this study disagreed with the chairman’s interpretation of his findings. The chairman of WTR told us that he decided to report on these studies before they were published because the potential public health threat of mobile phones made it important to report on the research developments as soon as possible. In the wake of the WTR controversy, CTIA decided to fund a research effort that would follow up on the two studies conducted under WTR that had raised questions, as well as assess what further research might be needed. The vehicle for this follow-up work is a cooperative research and development agreement (CRADA), signed in June 2000, between CTIA and FDA. In contrast to WTR, which had a broad mission, the scope of the CRADA is limited to addressing the concerns raised by the two previous studies and assessing what further research might be needed. Overall, the research planned under the CRADA represents a small piece of the ongoing research worldwide related to mobile phone safety, government officials and scientists in the area told us. Unlike the WTR effort, the CRADA involves the direct participation of FDA. CTIA officials told us that their experience with WTR taught them that FDA involvement would be beneficial because it would add accountability and scientific credibility to the new research effort. FDA’s role in the CRADA is to (1) determine what types of research studies should be conducted, (2) evaluate and prioritize the research proposals received, and (3) review and assess the results of the research. CTIA is administering the process for procuring the research, and the research studies themselves are being conducted by third parties via contracts with CTIA. 
Because these are private contracts, CTIA says they will not be made publicly available, although it does plan to release highlights of the contracts’ provisions. All of the research, as well as all costs incurred by FDA, is being paid for by CTIA, which retains the final authority to decide which proposals are chosen and funded. Thus, in contrast to WTR, the CRADA will not include a division between the funding source and management of the research. However, CTIA has said it intends to follow FDA’s recommendations concerning the research agenda. The request for proposals that CTIA issued in September 2000 for the first set of studies incorporated FDA’s recommendations with no changes. CTIA and FDA also told us they expect that the contracts with researchers will include provisions to ensure that the research results are published in peer-reviewed journals and that the research data are owned and controlled by the researcher, not by CTIA. An essential element in building public confidence about the independence and objectivity of this follow-up research effort is keeping the CRADA process open and accessible to the public. The FDA working groups that are developing research recommendations hold publicly announced open meetings. In addition, the research agendas that the working groups propose and the requests for proposals that CTIA issues are publicly available. However, at the time we completed our audit work, FDA had not yet decided the extent to which it would make public its recommendations to CTIA on which proposals to fund. If these recommendations are not publicly available in some form, it will not be possible to ensure that CTIA is following FDA’s recommendations. FDA officials told us that making their full recommendations public, including individual reviewers’ comments, would undermine the review process, which depends on anonymous reviewers providing candid critiques of research proposals. 
However, they said that they are considering ways of providing the public with a summary of their recommendations that would still protect the integrity of the review process. Although several federal agencies are involved in radiofrequency safety issues, FCC is responsible for regulating mobile phones. In 1996, FCC established rules setting a human exposure limit for radiofrequency energy from mobile phones, based on criteria developed by private standard-setting organizations and input from other federal agencies. Manufacturers are responsible for testing mobile phones to certify compliance with FCC's exposure limit, but the industry does not have uniform testing procedures, which significantly increases variability in test results. An international standard-setting organization has been working since 1997 to develop uniform testing procedures. This effort is nearing completion, but there are still some testing issues to resolve. FCC has revised its own nonmandatory guidance on testing to reflect the procedures being developed by the standard-setting organization. However, FCC is waiting for the organization to complete its effort before issuing the revised guidance. In the area of staffing, FCC has been relying heavily on one staff specialist in radiofrequency exposure to review manufacturers' test results for compliance with FCC's exposure limit and to perform some in-house testing of phones. FCC has attempted to recruit an additional specialist but says that it is having trouble competing with the private sector for qualified applicants. Under the Federal Radiation Council Authority, transferred to EPA by Reorganization Plan No. 3 of 1970, EPA is responsible for, among other things, advising the President on radiation matters, including providing guidance to all federal agencies on formulating protective standards on radiation exposure.
Upon presidential approval of EPA's recommendation for formulating standards, the pertinent agencies would be responsible for implementing the guidance. EPA chairs the Radiofrequency Interagency Work Group, which coordinates radiofrequency health-related activities among the various federal agencies with responsibilities in this area. Members of the working group are EPA, FCC, FDA, the National Institute for Occupational Safety and Health, the National Telecommunications and Information Administration, and the Occupational Safety and Health Administration. NIH also participates in the working group. FCC's exposure limit is based on standards developed by the Institute of Electrical and Electronics Engineers, Inc. (IEEE) and adopted by the American National Standards Institute (ANSI), as well as on recommendations of the National Council on Radiation Protection and Measurements (NCRP). IEEE is a membership organization that develops industry standards, among other activities. ANSI is a nonprofit, private-membership organization that coordinates the development of voluntary national standards. NCRP is a not-for-profit corporation chartered by the Congress to formulate and disseminate information, guidance, and recommendations on radiation protection and measurements. (See Section 704(b) of the Telecommunications Act of 1996, Pub. L. No. 104-104, 110 Stat. 56 (1996).) The limit is expressed as a specific absorption rate (SAR). SAR is the widely accepted measurement of radiofrequency energy absorbed into the body in watts per kilogram (W/kg) averaged over some amount of tissue ranging from the entire body to 1 gram. The limit incorporates a large safety margin below the levels of exposure shown to cause adverse effects in animals. Because this limit is based on whole-body exposure, it was adjusted to account for the fact that mobile phones expose only a part of the body to radiofrequency energy. The resulting limit adopted by FCC for mobile phones is that their SAR levels may not exceed 1.6 watts per kilogram (W/kg) averaged over 1 gram of tissue. Some other countries have chosen to adopt a somewhat higher exposure limit than FCC.
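To make the SAR definition above concrete: the absorption rate at a point in tissue is the tissue's electrical conductivity times the square of the induced electric field, divided by the tissue's mass density. The following is a minimal illustrative sketch; the function name and the tissue values are assumptions chosen for illustration, not figures from FCC guidance.

```python
def sar_point(conductivity_s_per_m, e_field_v_per_m, density_kg_per_m3):
    """Local SAR in W/kg: conductivity * |E|^2 / mass density,
    where E is the RMS electric field induced in the tissue."""
    return conductivity_s_per_m * e_field_v_per_m ** 2 / density_kg_per_m3

# Assumed, roughly tissue-like values, for illustration only.
sar = sar_point(conductivity_s_per_m=0.9,    # siemens per meter (assumed)
                e_field_v_per_m=40.0,        # volts per meter (assumed)
                density_kg_per_m3=1000.0)    # about the density of water
print(round(sar, 2))  # 1.44, just under FCC's 1.6 W/kg mobile phone limit
```

In practice, test laboratories average such point values over a mass of tissue; FCC's mobile phone limit specifies an average over 1 gram.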
Because the only proven adverse health effects of radiofrequency exposure are caused by heat, the exposure limit is not designed to address the possibility of any non-heating-related effects, such as cancer. FCC says that given the lack of evidence of a non-thermal effect, the current exposure limit is reasonable, particularly since it incorporates a large safety factor for known heating effects. A federal court of appeals upheld FCC's radiofrequency exposure guidelines (Cellular Phone Taskforce v. FCC, 205 F.3d 82 (2d Cir. 2000)), and earlier this year the Supreme Court denied petitions for certiorari challenging this decision. In SAR testing, a mobile phone is positioned against a model of a human head filled with a fluid that simulates the electrical properties of human tissue, and a robotic probe moves through the mixture, measuring the radiofrequency energy that is being absorbed at various locations. The phone is tested in several configurations, such as with its antenna extended and retracted, and at different frequencies. The phone's certified SAR level is the highest SAR level measured during these tests. In order to receive FCC authorization, none of the SAR test results for the head or body can exceed FCC's exposure limit of 1.6 W/kg averaged over 1 gram of tissue. SAR test results for mobile phones can vary substantially because of measurement uncertainties and the use of different testing procedures. Variations due to measurement uncertainties are the result of limitations inherent in technological and human accuracy. For example, FCC officials said that small differences in the way different technicians set up the test, mix the tissue fluid, or calibrate the measurement instruments can introduce variation into the test results. Variations also occur because laboratories can use different testing procedures. When FCC established its mobile phone radiofrequency exposure limit in 1996, the industry did not have uniform standards for testing SAR levels.
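The certification logic just described, in which a phone is tested in several configurations and the worst case becomes its certified SAR, can be sketched as follows. The configuration labels and SAR values are hypothetical.

```python
FCC_LIMIT_W_PER_KG = 1.6  # averaged over 1 gram of tissue

def certified_sar(test_results):
    """The certified SAR is the highest SAR measured across all
    tested configurations (antenna positions, frequencies, etc.)."""
    return max(test_results.values())

def complies(test_results):
    """For FCC authorization, no head or body test result may
    exceed the exposure limit."""
    return all(sar <= FCC_LIMIT_W_PER_KG for sar in test_results.values())

# Hypothetical measurements for a single phone model.
results = {
    "antenna extended, 835 MHz": 1.12,
    "antenna retracted, 835 MHz": 1.31,
    "antenna extended, 1900 MHz": 0.94,
}
print(certified_sar(results))  # 1.31
print(complies(results))       # True
```

Note that a phone is judged against every individual result, not just an average, so a single out-of-limit configuration is enough to block authorization.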
FCC published a technical bulletin in 1997 to assist manufacturers in complying with its radiofrequency exposure limits (Supplement C to FCC's Office of Engineering and Technology Bulletin 65, Evaluating Compliance with FCC Guidelines for Human Exposure to Radiofrequency Electromagnetic Fields (Dec. 1997)), but the bulletin was not intended to establish mandatory procedures for testing mobile phones. FCC currently has one specialist in radiofrequency exposure who is responsible for reviewing applications that involve SAR testing. We found that FDA and FCC differ in whether or not they expect manufacturers to incorporate measurement uncertainty in determining compliance with radiofrequency safety limits. FDA rules state that microwave oven manufacturers must take all measurement errors and uncertainty into account when demonstrating compliance with FDA's radiofrequency energy performance standard for these devices. An FDA official said that this rule essentially lowers a microwave oven's maximum level of allowable radiofrequency energy leakage by the margin of measurement uncertainty. FCC, on the other hand, considers a phone to be in compliance if the manufacturer's SAR test result is within FCC's exposure limit, without incorporating the measurement uncertainty associated with the test result. However, to ensure compliance with the radiofrequency exposure limit, FCC looks for specific test procedures and parameters used by manufacturers that would tend to overestimate SAR. If, in the reviewer's judgment, an applicant's testing procedures appear to contain irregularities or raise questions, the reviewer can request additional supporting data or further SAR testing. FCC officials responsible for drafting FCC testing guidance were not aware of FDA's different treatment of measurement uncertainty when we discussed it with them. They told us that they intended to contact FDA to discuss this issue and obtain FDA's views and advice.
FCC says that standardizing SAR testing procedures could significantly reduce the variability in test results and speed up the FCC authorization process. In February 1997, IEEE began an effort to set uniform industrywide testing standards. Staff from FDA and FCC participate in this effort. After 4 years of work, IEEE’s standards-setting committee has made considerable progress in developing draft standards. Agreement appears to have been reached on many of the important issues, including standardizing the properties of the mixture that simulates human tissue and the testing positions of the phone. However, IEEE’s draft standards have not yet been finalized because some technical issues still need to be resolved within the committee. FCC considers the lack of uniform SAR testing standards to be a major concern. In October 1999, following a media report raising questions about SAR testing, FCC issued a press release stating that if the industry standard-setting committees did not act promptly to finalize standardized testing procedures, FCC would mandate action on its own. In keeping with this statement, FCC officials said that they have developed a draft revision of their 1997 guidance, which is more inclusive and incorporates features of the testing standards that IEEE is developing. FCC officials said that the issuance of their revised guidance is currently on hold pending the completion of IEEE’s testing standards. When we asked if FCC could immediately issue guidance based on those IEEE testing procedures that have already been agreed upon, FCC officials said that this could be done through an FCC public notice. They noted that they have already begun informally advising applicants to use certain of the most widely accepted elements of the test procedures under consideration by IEEE. 
FCC officials said that although IEEE's new testing standards will reduce the variations in test results due to the use of different procedures, some level of measurement uncertainty is unavoidable. Thus, FCC officials said that, as with any measurement system, SAR tests can provide only a best estimate of a phone's maximum SAR level. As noted above, the degree of measurement uncertainty depends on a number of factors, including the calibration of the equipment, the precision with which the technician makes the measurements, and the errors due to system instrumentation. Because of these measurement uncertainties, FCC officials said that a phone's actual maximum SAR level could fall somewhere within a range of 30 percent above or below the phone's test results (at a 95 percent confidence level), even with uniform IEEE testing procedures in place. An industry-funded project conducted by the University of Maryland in cooperation with FDA will attempt to determine more precisely the degree of measurement uncertainty that can be expected with the new IEEE testing standards. To verify the test data provided by mobile phone manufacturers, FCC is planning to conduct spot tests of some phones' SAR levels at its Office of Engineering and Technology laboratory. Although FCC officials had hoped to have the facility operational by fall 2000, some needed equipment was still being procured at the time of our review. FCC officials noted that because FCC does not have the staff resources to test every mobile phone model that it authorizes, they can only test a sample of these phones. Even so, FCC faces a serious staffing problem in carrying out this initiative. Currently, FCC has only one radiofrequency exposure specialist to both oversee reviews of equipment authorization applications that involve radiofrequency exposure evaluation (about 50 a month, of which 15 to 20 are for mobile phones) and run the new testing facility.
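A short calculation shows why this uncertainty matters, and how FCC's compliance convention differs from the FDA microwave-oven convention discussed earlier. The 30 percent figure is FCC's; the measured SAR value is hypothetical, and applying FDA's convention to phones is shown only for comparison, not as actual practice.

```python
UNCERTAINTY = 0.30  # FCC's cited +/-30 percent at a 95 percent confidence level
LIMIT = 1.6         # W/kg, averaged over 1 gram of tissue

def fcc_style_complies(measured):
    """FCC convention: compare the raw test result to the limit."""
    return measured <= LIMIT

def fda_style_complies(measured):
    """FDA's microwave-oven convention, applied here hypothetically:
    the effective limit is lowered by the uncertainty margin."""
    return measured <= LIMIT * (1 - UNCERTAINTY)

measured = 1.4  # hypothetical certified SAR test result, in W/kg
low, high = measured * (1 - UNCERTAINTY), measured * (1 + UNCERTAINTY)
print(round(low, 2), round(high, 2))  # the true maximum could lie in 0.98-1.82
print(fcc_style_complies(measured))   # True: 1.4 is under the 1.6 limit
print(fda_style_complies(measured))   # False: 1.4 exceeds 1.6 * 0.7 = 1.12
```

With a 1.4 W/kg test result, the upper end of the uncertainty range exceeds FCC's 1.6 W/kg limit even though the phone passes under FCC's convention.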
FCC and FDA officials have characterized this one specialist as being FCC’s key quality control point for determining whether mobile phones comply with FCC’s exposure limits. FCC officials said that they have tried to recruit another radiofrequency exposure specialist but were unable to find a suitable candidate because it is difficult to compete with the private sector for qualified individuals. They stated that they plan to continue their recruiting effort. To help cope with the current staffing situation, FCC recently trained members of its engineering staff to take over reviewing SAR testing reports under the supervision of the specialist. The goal is to have the specialist spend about half of his time overseeing SAR reviews and the rest of his time on the actual testing of phones’ SAR levels. FCC has also turned to Telecommunications Certification Bodies (TCB) to help process equipment authorization applications. A TCB is a private organization that FCC, the National Institute of Standards and Technology, and the American National Standards Institute have accredited to review applications and issue product authorization grants on behalf of FCC. TCBs are processing approximately half of equipment authorization applications, none of which involve SAR tests. Eventually, FCC plans to move the bulk of its application processing to TCBs, including the approval of applications that include SAR tests, while retaining oversight of the TCBs’ activities. At the time of our review, however, the transfer of additional authority to the TCBs had been placed on hold because of a lack of published uniform test procedures. In addition, FCC officials said that TCBs have experienced more difficulties in their application reviews than initially anticipated. FCC officials indicated that they are continuing their training and guidance efforts to improve TCBs’ overall performance. In the near term, all SAR reviews will be performed at the FCC laboratory. 
During the past year, as new research studies were published, the print and broadcast media have presented a variety of assessments about the potential health effects of mobile phones. Given this situation, the federal government’s role in providing the public with clear information on this issue is particularly important. FDA’s consumer information on mobile phone health issues, however, has not been revised since 1999 and does not reflect more recent studies and research developments. Both FCC’s Office of Engineering and Technology and its Consumer Information Bureau provide the public with information on radiofrequency exposure issues but do not meet general consumers’ need for clear and concise information. These shortcomings are a cause for concern because the industry is including FDA’s and FCC’s consumer information with most new mobile phones. FDA has a short information document, found on its Web site, called “Consumer Update on Mobile Phones.” The document, dated October 20, 1999, states that the available scientific evidence does not demonstrate that there are any adverse health effects associated with the use of mobile phones. However, FDA adds that there is not enough evidence to know for sure, either way, whether handheld mobile phones might be harmful. The document discusses several research studies, including the two WTR studies that are being followed up under the cooperative research and development agreement between FDA and CTIA. For consumers who want to take simple precautions to limit their exposure to mobile phone radiofrequency emissions, FDA’s update mentions some steps, such as avoiding extended conversations or using a headset while carrying the phone at the waist. Although informative, the update has not been revised since 1999, and consequently does not discuss the significance of major, recently published research studies that have been reported and debated in the media. 
An FDA official told us that the update had not been revised because the scientific picture had not changed significantly since then. Consumers, however, have no way of knowing this from the update and may be left in doubt about FDA's views on recent research developments. Another problem with the update is that much of its discussion of health research is written in a technical manner that may be confusing to the general public. This issue is particularly important because CTIA has been using FDA's consumer update as part of its voluntary program that enables manufacturers of mobile phones to receive CTIA certification that their phones meet certain performance, safety, and labeling standards. CTIA officials estimate that 70 to 75 percent of the mobile phones currently sold in the United States are certified under this program. One of the requirements for CTIA certification is that manufacturers include the text of FDA's "Consumer Update on Mobile Phones" in the packaging of the phones. According to FDA, however, this document was not designed for mass distribution as an insert in mobile phone packaging. Rather, the information was for use in responding to inquiries received by FDA about the safety of mobile phones. Consumers' primary source of information from FCC on radiofrequency exposure is the Office of Engineering and Technology's (OET) "RF Safety Program" Web page (http://www.fcc.gov/oet/rfsafety/). Among the materials available there is OET Bulletin 56 (4th edition, Aug. 1999), which includes the question "Is it safe to use a cellular phone?" with this answer: "The ANSI/IEEE and NCRP RF safety guidelines recommend that low-power devices such as cellular hand-held phones not cause a localized exposure in excess of specific absorption rate (SAR) of 1.6W/kg. Studies of human head models using cellular phones have generally reported that the SAR levels are below 1.6W/kg level as averaged over 1 gram of tissue under normal conditions of use.
However, some recent studies have reported higher peak levels under 'worst-case' conditions that suggest the need for further dosimetric studies." This answer, however, does not explain what a SAR number's significance is in relation to the health issue. Without a context for SAR numbers, consumers will have difficulty understanding what to make of the SAR information they find. OET officials noted that information on SARs is provided on its "RF Safety Program" Web page, which also contains instructions on using the equipment authorization database. However, we found that this information does not provide adequate consumer-oriented information on radiofrequency exposure and SAR issues. In addition, consumers may access the database directly, without first accessing any other FCC material, because organizations outside of FCC are providing the database's Web address to consumers. For example, CTIA announced last summer that all new mobile phones receiving CTIA certification after August 1, 2000, would include labeling on the outside of the phone's box that includes both the phone's FCC identification number and the Internet address for the equipment authorization database. These CTIA-certified phones will also include text material inside the boxes that provides each phone's SAR number and information on radiofrequency exposure issues. FCC's Consumer Information Bureau also provides consumer information on mobile phone issues, including its "Market Sense" brochure; this material states that there "is no scientific evidence that proves wireless phone usage can cause cancer, increased blood pressure, memory loss, or other health problems," though research is continuing. When asked to comment on this characterization, OET officials shared our concern that it could be misleading, because it implies that the health issue is settled. We also pointed out that the Bureau's Web page did not direct consumers to information resources on radiofrequency exposure issues found elsewhere on the FCC Web site, such as OET's documents. After we brought these issues to the attention of officials in the Consumer Information Bureau and OET, they began discussions to improve this situation.
By the time we concluded our review, the Bureau had created Web links between its consumer Web page and OET's RF Safety Web page and had begun working with OET to revise the Market Sense brochure. Though these coordination steps are in the right direction, there is still a need for a consumer-oriented FCC document that provides lay readers with clear, concise, and accurate information on radiofrequency exposure and SAR issues. Scientific research to date does not demonstrate that the radiofrequency energy emitted from mobile phones has adverse health effects, but the findings of some studies have raised questions indicating the need for further investigation. The U.S. government sponsors and supports some research efforts on mobile phone health issues, but wider research efforts are under way internationally. The World Health Organization has identified priorities for research on mobile phone health issues, and a variety of organizations in Europe, the United States, and elsewhere have begun efforts to address these research needs. Given the long-term nature of much of the research being conducted—particularly the epidemiological and animal studies—it will likely be many more years before a definitive conclusion can be reached on whether mobile phone emissions pose any risk to human health. While limited in scope, the cooperative research and development agreement between FDA and the mobile phone industry is among the research efforts being undertaken internationally that may help provide answers. Although the initiative is being funded solely by the industry, FDA's active role in setting the research agenda and providing scientific oversight should help alleviate concerns about the objectivity of industry-funded research. However, FDA has not yet decided the extent to which it will make public its recommendations to CTIA as to which specific research proposals should be funded.
There is no way for the public to be sure that CTIA is following FDA's recommendations unless these recommendations are publicly available in some form. There still are no standardized procedures for how phones should be tested for compliance with FCC's 1996 radiofrequency exposure limit. This results in substantial variation in testing, complicating FCC's review of manufacturers' test results. This variation could be reduced with uniform testing procedures, though the test results will still include some unavoidable measurement uncertainties. Having only one specialist to oversee the review of manufacturers' SAR testing and operate FCC's in-house mobile phone test facility also creates a human capital problem for FCC. FCC recognizes that additional resources are needed in this area but is having difficulty competing with the private sector for qualified individuals. Given the prominence of the mobile phone health issue, FDA and FCC need to provide the public with clear, accurate, and timely information so that consumers can make informed decisions. The information that FDA and FCC provide consumers on health and radiofrequency exposure issues is not always up to date or written for a general consumer audience. Given that industry is including information from FDA and FCC with most new phones, it is particularly important that these shortcomings be corrected. We recommend that the Chairman of the Federal Communications Commission take the following actions: Direct the Office of Engineering and Technology to issue revised guidance on SAR testing procedures to reduce variations in test results caused by a lack of standardized procedures. This guidance should be kept current as industry standards evolve.
Direct the Office of Engineering and Technology to consult with FDA on the advisability of adopting FDA’s method of incorporating measurement uncertainty in determining compliance with radiofrequency safety limits, and make the results of this communication publicly available. Direct the Consumer Information Bureau and the Office of Engineering and Technology to work together to develop clear, consistent, and easily accessible consumer materials on mobile phone radiofrequency exposure issues. In particular, these offices should modify the product authorization database Web site so that it links consumers to clear, concise information on radiofrequency exposure issues and the meaning of SAR data. Direct the Office of Managing Director, as part of human capital planning, to develop a strategy for meeting the need for additional expertise in radiofrequency exposure and testing issues. In addition, we recommend that the Administrator of the Food and Drug Administration direct the Center for Devices and Radiological Health to take the following actions: Publicly report on the extent to which CTIA is following FDA’s recommendations in choosing and funding the specific research proposals conducted under the cooperative research and development agreement between FDA and CTIA. Develop a new consumer update document that provides a current overview of the status of health issues and research related to mobile phones. Because the industry trade association requires manufacturers to include the text of this document in the packaging of mobile phones that it certifies, the document should be written with a broad consumer audience in mind. Given the fast pace of developments on these issues, FDA should revise this document as significant research and policy events occur. We provided a draft of this report to NIH, FDA, and FCC for review and comment. NIH recommended some technical changes, which we incorporated into the report where appropriate. 
FDA said that the report accurately summarizes the public health concerns relating to mobile phones, FDA's role in addressing these concerns, and the current state of the scientific knowledge. FDA provided us with some technical changes, which we incorporated into the report where appropriate. FDA also said that our recommendations to the agency—regarding the CRADA and consumer information efforts—are consistent with FDA's plans and goals and that it expects to implement them shortly. FCC said that the report appropriately describes the roles of federal agencies regarding radiofrequency energy health issues. It emphasized that because FCC does not have primary jurisdiction or expertise in health and safety matters, it relies on the guidance of other federal agencies and on expert standard-setting organizations to set exposure limits. FCC also provided certain clarifications to our draft, which we incorporated where appropriate. It also described actions that are planned or under way to address issues raised in our report, including those related to staffing, measurement uncertainty, and public information. FCC's written comments and our responses appear in appendix I. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days after the date of this letter. At that time, we will send copies to interested congressional committees; Michael K. Powell, Chairman, Federal Communications Commission; Dr. Bernard A. Schwetz, Acting Principal Deputy Commissioner, Food and Drug Administration; Dr. Ruth Kirschstein, Acting Director, National Institutes of Health; Mitchell E. Daniels, Jr., Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. If you have any questions about this report, please call me at 202-512-2834. Key contacts and major contributors to this report are listed in appendix II. 1.
We added text that further emphasizes that FCC is not a health and safety agency. 2. As we note in our report, FDA rules regarding microwave ovens state that manufacturers must take into account all of the measurement errors and uncertainty when demonstrating compliance with the radiofrequency energy performance standard for these devices. The issue we are raising is whether FCC should adopt a similar approach as part of its equipment authorization process for mobile phones. We have changed our report to emphasize that we are referring to differences in FDA's and FCC's approach to the uncertainties associated with manufacturers' own testing. We look forward to the outcome of FCC's continued consultations with FDA on this issue. In addition to those named above, Jason Bromberg, A. Don Cowan, Keith Cunningham, Gregory Ferrante, Janet Heinrich, and Mindi Weisenbloom made key contributions to this report.
The anticipated rebuilding and repairing of residential and commercial structures in the Gulf Coast creates an important opportunity for incorporating energy efficiency improvements that could produce long-term energy cost savings. We estimated that newer building codes and standards could significantly reduce energy expenditures for residential and commercial buildings in Louisiana and Mississippi, depending on the rebuilding efforts in these states. The sheer magnitude of the reconstruction effort creates a tremendous opportunity for incorporating energy efficiency improvements into rebuilt homes and buildings. Many Gulf Coast neighborhoods and communities need to be rebuilt—some from the ground up—especially since an estimated 122,261 homes in Louisiana and Mississippi were destroyed or severely damaged. This rebuilding creates an opportunity for these states to make wide-scale improvements to their building stock, especially the older vintage housing in the areas. In addition, state and local governments in Louisiana and Mississippi are still engaged in short- and long-term planning efforts to recover from the hurricanes. Since these planning efforts are evolving, now is an opportune time to consider fully incorporating energy efficiency improvements in the reconstruction efforts. Furthermore, Louisiana's and Mississippi's recent adoption of newer and more energy efficient building codes creates a unique opportunity for rebuilding all of the destroyed and severely damaged homes in a manner that could result in significant energy cost savings for these two states. In partnership with DOE's PNNL, we analyzed a range of energy efficiency levels to determine the potential energy cost savings that could be achieved if single-family homes and commercial buildings in Louisiana and Mississippi were constructed in accordance with various residential building codes and commercial energy standards.
For residential buildings, we examined four energy efficiency levels associated with building in accordance with various codes—a "baseline" level, a "code" level, and two "above-code" levels. The baseline level we used represents the estimated energy efficiency associated with construction practices in areas of the Gulf Coast that do not have building codes or where the codes may not be enforced. The code level represents the energy efficiency associated with building in accordance with the energy provisions of the ICC's 2006 residential code. The third level represents the energy efficiency associated with building to meet the Energy Star New Homes Guidelines, which require a 15 percent improvement over the ICC's code for all energy used in a house. The fourth level represents the energy efficiency necessary to qualify for the $2,000 home builders' federal tax credit for energy efficient new homes, which requires a 50 percent reduction in space heating and air conditioning energy use compared with the ICC's code. We estimated that homes built to meet the ICC's 2006 residential code could reduce energy costs by 24 to 28 percent, resulting in aggregate annual savings ranging from $20 to $28 million, depending on the type of foundation used, the energy efficiency measures to which the homes are built, and the number of homes being rebuilt. More specifically, our analysis showed that, depending on the parameters of individual homes, estimated annual per house energy cost savings ranging from $167 to $233 could be achieved if new homes were built in accordance with the ICC's 2006 residential code, rather than current construction practices in the Gulf Coast region where there are no building codes or where codes are not enforced.
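As a rough cross-check of these figures (our own arithmetic, not PNNL's modeling), the aggregate range can be approximated by multiplying the per house savings by the number of destroyed or severely damaged homes:

```python
# Rough cross-check: per-house annual savings times the number of destroyed
# or severely damaged homes approximates the reported aggregate range.
# This is our own arithmetic based on the report's figures, not PNNL's model.

HOMES_DESTROYED_OR_SEVERELY_DAMAGED = 122_261  # Louisiana and Mississippi

per_house_savings_low = 167    # dollars per year, low estimate
per_house_savings_high = 233   # dollars per year, high estimate

aggregate_low = HOMES_DESTROYED_OR_SEVERELY_DAMAGED * per_house_savings_low
aggregate_high = HOMES_DESTROYED_OR_SEVERELY_DAMAGED * per_house_savings_high
# roughly $20.4 million to $28.5 million per year, consistent with the
# reported $20 to $28 million aggregate range
```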
Furthermore, greater home energy cost savings could be obtained if consumers rebuild their homes to meet the Energy Star New Homes Guidelines or if home builders take advantage of the energy efficient home tax credit provisions of the Energy Policy Act of 2005 (EPACT) by building homes that use 50 percent less energy for heating and cooling than those built to meet the ICC's code. For example, annual per house energy cost savings of $310 to $364 over baseline levels could be achieved by meeting Energy Star specifications, while $371 to $447 in savings could be realized by building to meet the tax credit criteria in EPACT. The potential for Louisiana and Mississippi to achieve significant energy cost savings if the estimated 122,261 homes that were destroyed or severely damaged are rebuilt in accordance with various energy efficiency measures is shown in more detail in appendix II, table 1. In general, the improved energy efficiency features that are part of the ICC's 2006 residential code, the Energy Star New Homes Guidelines, and the EPACT tax credit include more efficient windows and heating and cooling equipment, improved building envelope and duct sealing, and increased insulation. While building homes in accordance with the newer building codes and above-code measures will improve a home's energy efficiency, it will also increase home construction costs because more expensive and efficient energy features are required. However, these additional costs can generally be recovered within several years. Details on the cost recovery period for several key energy efficiency features can be found in appendix II, table 2. For commercial buildings—offices, hospitals, schools, and retail—we used the current commercial energy standards for Louisiana and Mississippi as baselines: the ASHRAE 2001 standard for Louisiana and the ASHRAE 1975 standard for Mississippi.
We then estimated the potential energy cost savings associated with rebuilding commercial structures in Louisiana in accordance with the ASHRAE 2004 standard and in Mississippi in accordance with ASHRAE's 2001 standard. We also estimated the potential savings that could be achieved by constructing buildings to meet "above code" levels, such as the requirements of the Leadership in Energy and Environmental Design (LEED) green building program and the EPACT commercial tax credit level, which requires 50 percent less energy use than the ASHRAE 2001 standard. The results of our commercial building analysis showed that estimated annual energy cost savings of between 7 and 34 percent could be achieved for commercial buildings in Mississippi if commercial structures were rebuilt in accordance with the ASHRAE 2001 standard, and savings of between 7 and 13 percent could be achieved in Louisiana if commercial structures were rebuilt in accordance with the ASHRAE 2004 standard. More detailed information on these potential savings is presented in appendix III, table 5. The primary reason for this significant savings is that the newer energy standards call for the use of less lighting power, which directly saves energy and indirectly reduces cooling needs because less heat is given off from lighting fixtures. Overall, adopting newer and more efficient commercial energy standards in the Gulf Coast would reduce energy operating costs as well as construction costs because the newer standards can be met with fewer, more efficient lighting fixtures, resulting in immediate cost recovery. Our analysis also shows that greater energy cost savings could be obtained for commercial buildings if they were constructed in accordance with even higher energy efficiency measures. These efficiency measures include the LEED rating system, which awards points for buildings that use less energy than required by the ASHRAE 2004 standard, and the federal tax credit level for commercial buildings.
The energy cost savings associated with these two "above code" energy efficiency approaches could range from $17,263 to $286,285 per building, depending on the building type and size. Additional information about these potential savings is presented in appendix III, table 6. Some residential and commercial buildings damaged by the Gulf Coast hurricanes will not need to be replaced completely, but they will require repairs. Consumers who decide to repair homes or commercial structures can reduce their energy expenditures by replacing older and less efficient energy consuming equipment that may have been destroyed or damaged with more energy efficient products. We identified several common energy efficiency improvements that can be made to both residential and commercial buildings. For some items, such as cooling systems, minimum federal standards set by DOE require the manufacture of more efficient units than would have been used prior to the Gulf Coast hurricanes. Therefore, energy cost savings from these kinds of equipment could be achieved by simply replacing older equipment with a standard newer model. Some of the more common energy efficiency improvements include more efficient air conditioning systems, better insulating windows, and improved duct sealing. Although these systems are generally more costly than older, less efficient units, with the exception of window replacements, the additional costs can usually be recovered in a few years. Additional information on the estimated energy cost savings that these improvements could bring to both Louisiana and Mississippi is presented in appendix II, table 3. Residential consumers can also reduce their energy costs by replacing damaged incandescent lighting and appliances with compact fluorescent lighting (CFL) and Energy Star appliances. On a per house basis, switching to CFLs can save consumers an estimated $48 a year in electricity costs for lighting.
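The "recovered in a few years" statements above reflect a simple-payback calculation, which can be sketched as follows. The dollar figures in the example are hypothetical illustrations, not values from our analysis:

```python
def simple_payback_years(extra_cost_dollars, annual_savings_dollars):
    """Years needed for an efficiency upgrade's annual energy savings to
    recover its additional up-front cost (simple payback, ignoring
    discounting and future energy price changes)."""
    return extra_cost_dollars / annual_savings_dollars

# Hypothetical example: an upgrade costing $600 more than standard equipment
# that saves $200 per year in energy costs pays for itself in 3 years.
payback = simple_payback_years(600, 200)
```

The same formula explains why window replacements are the exception noted above: their extra cost is large relative to their annual savings, so the payback period stretches well beyond a few years.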
Installing Energy Star appliances can produce modest annual dollar savings compared with appliances that simply meet the current minimum federal manufacturing standards. However, according to PNNL, if these appliances are used to replace older appliances that may be much less efficient, the cost savings can be considerable. According to Energy Star data, an Energy Star refrigerator is at least 15 percent more efficient than federal minimum manufacturing standards, meaning that it would save an estimated $9 a year over a new conventional refrigerator. Savings from replacing an older refrigerator could be much higher, for example $65 a year over a pre-1993 refrigerator. The additional costs and the energy cost savings that may be achieved if these lighting and appliance upgrades are made in the estimated 143,862 homes that received major damage is outlined in appendix II, table 4. Our analysis demonstrated that lighting upgrades are the primary area where energy cost savings can be achieved from renovating damaged commercial buildings in the Gulf Coast region. For example, if commercial buildings—offices, schools, hospitals, and retail—in Mississippi were renovated to meet the ASHRAE 2004 standard, rather than the state's current standard (the ASHRAE 1975 standard), the cumulative savings per building would be $18,689 to $150,538 per year depending on the building type. In contrast, renovating these same building types in Louisiana so that they go beyond the state's current ASHRAE 2001 standard to meet the ASHRAE 2004 standard would result in $5,704 to $30,537 in annual savings per building. According to PNNL officials, all other building renovations pale in comparison to the impact that lighting changes would have in terms of producing energy cost savings for commercial buildings. Additional information about the potential energy cost savings associated with lighting in commercial buildings is presented in appendix III, table 7.
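Scaling the per house lighting figure across the damaged housing stock gives a sense of the aggregate stakes. The multiplication below is our own arithmetic based on the per house figures cited in the text, not a result from the PNNL analysis:

```python
# Per house figures cited in the text
MAJOR_DAMAGE_HOMES = 143_862        # homes that received major damage
CFL_SAVINGS_PER_HOUSE = 48          # dollars/year from switching to CFLs
FRIDGE_VS_NEW_CONVENTIONAL = 9      # Energy Star vs. new conventional refrigerator, $/year
FRIDGE_VS_PRE_1993 = 65             # Energy Star vs. pre-1993 refrigerator, $/year

# Aggregate annual lighting savings if every major-damage home switched to CFLs
cfl_total = MAJOR_DAMAGE_HOMES * CFL_SAVINGS_PER_HOUSE  # about $6.9 million/year
```

As the refrigerator constants illustrate, the savings depend heavily on what is being replaced: swapping out a pre-1993 unit yields roughly seven times the annual savings of upgrading from a new conventional model.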
Three substantial challenges may prevent the energy cost savings opportunities presented by the Gulf Coast reconstruction from being fully realized. First, a general shortage of skilled construction workers and, specifically, a shortage of construction workers trained to meet the newer building codes may limit energy cost savings. Second, the states will face serious challenges in ensuring compliance with the newer building codes, potentially limiting the energy cost savings that can be realized. Third, consumers who consider rebuilding and repairing their homes are faced with making other decisions that may make energy efficiency a low priority. The shortage of a skilled construction workforce capable of sustaining the rebuilding and repairing of destroyed and damaged homes in Louisiana and Mississippi may limit the energy cost savings that can be achieved by rebuilding to the newly adopted building codes. The construction workforce shortage is twofold—that is, there is a general shortage of construction workers and, more specifically, a shortage of skilled construction workers trained in the application of the newer building codes. A 2004 Department of Labor report cited an industry study that said in the year prior to the Gulf Coast hurricanes, nearly 75 percent of contractors nationwide reported experiencing skilled construction labor shortages. Louisiana and Mississippi builders told us that the labor shortage worsened when the hurricanes displaced some of their construction workforce to other states and caused an overwhelming demand for rebuilding and repairing destroyed and damaged residential and commercial buildings. Consequently, the demand for construction in the Gulf Coast region far exceeds the capacity of the local construction workforce.
For example, a study conducted by the RAND Corporation reported that to sustain the rebuilding efforts in New Orleans, the city would have to expand its number of construction firms, labor force, and building supply networks. In addition, there is currently a lack of skilled construction workers trained to meet the states’ new building codes and standards. According to many different stakeholders with whom we spoke, building code training is an important part of ensuring that buildings are properly constructed to meet the newer building codes, including the energy provisions. Training the construction workforce will require time and involve a learning curve, which may delay or even limit the energy cost savings achieved during the Gulf Coast reconstruction. According to state officials and home builders that we spoke to, prior to the Gulf Coast hurricanes the general construction workforce in Louisiana and Mississippi did not have to comply with any particular statewide building codes, and some parishes and counties had no residential building codes to guide home construction. As a result, there was not an overwhelming need for the general construction workforce to be familiar with the building codes developed by the ICC. However, the construction workforce in Louisiana and the five coastal counties in Mississippi will now need training on the application of the newer building codes that include wind, flood, and energy provisions. This is especially true for Louisiana, since it adopted mandatory statewide building codes. Home builders, energy efficiency practitioners, state officials, and non-profit organizations with whom we spoke acknowledged that fully implementing newer building codes will take time and will involve a learning curve before construction workers understand and are able to comply with the requirements. 
State officials and home builders told us that it will be difficult for local home builders—consisting of small-volume builders—to make the transition from not building according to a building code to now constructing buildings to meet the requirements of the most recent residential codes. In addition, according to the National Association of Home Builders, the ICC's energy code has caused problems for home builders because they have trouble finding the lowest cost solution that also complies with the code. All of these challenges may delay or even limit the energy cost savings. In an effort to address the skilled construction workforce shortage, the Business Roundtable—an association of chief executive officers of leading U.S. companies with $4.5 trillion in annual revenues and more than 10 million employees—in partnership with federal, state, and local government agencies, construction trade groups, businesses, and non-profit organizations, created the Gulf Coast Workforce Development Initiative as an effort to recruit and train up to 20,000 skilled construction laborers for the Gulf Coast region by the end of 2009. Recruitment efforts for this initiative are under way through the Gulf Rebuild, Education, Advancement, and Training (GREAT) Campaign. Under this campaign, participants enroll in a 4-week course to gain entry-level skills in preparation for jobs in the construction industry. In addition to the GREAT Campaign, there are other efforts under way to build a skilled construction workforce in the Gulf Coast states, including courses and related workshops at local colleges and universities and construction and building summits/expos being offered throughout the Gulf Coast states.
Having an adequate number of trained code officials to inspect buildings is vital to ensuring that rebuilding the hundreds of thousands of destroyed and damaged structures is done in accordance with the newly adopted building codes so that energy cost saving opportunities are realized. However, building industry representatives and state officials told us that Louisiana and Mississippi lack code offices, lack an adequate number of code officials, and may find it difficult to secure the resources to hire a sufficient number of adequately trained staff. Despite these challenges, efforts to enforce the new codes and standards in Louisiana and Mississippi are currently under way. Louisiana and Mississippi may not have adequate resources to open additional code offices and may not currently have adequate numbers of trained staff. For example, only a few Louisiana parishes and Mississippi counties have code compliance and enforcement programs, and implementing the new building codes will require more building code offices to be established. According to one Louisiana code official, because 57 of the state's 64 parishes did not have to comply with any mandatory statewide building codes before the Gulf Coast hurricanes, there was no need for building code offices in those particular parishes. In Mississippi, only those five coastal counties affected by the hurricanes are required to meet the new statewide building codes. According to Mississippi officials, despite the fact that three of the five counties had building codes and offices in place prior to the hurricanes, these counties will still need to hire and train additional code officials because of the overwhelming amount of rebuilding that remains and the new building codes. In addition, there was a consensus among the groups we interviewed that building code offices are currently overburdened, because there are too few officials and too many inspections.
Furthermore, Louisiana and Mississippi will face serious challenges in securing adequate staff and resources to support code enforcement. Both states reported that the local governments in the most severely affected parishes and counties have limited financial resources to provide staff to implement the newer building codes. State officials, home builders, and non-profit organizations pointed out that code officials are taking other jobs in the private sector, which means code offices will have to fill those vacated positions as well as hire and train additional code officials. According to one state official in Louisiana, there were only 35 code inspectors statewide, only 7 of whom were certified to enforce the ICC building code recently adopted by the state that includes energy provisions. Furthermore, local governments will face challenges in training code officials and code users in the application of the new building codes. Building codes are inherently complex and technical, thereby potentially affecting compliance and enforcement, especially for larger commercial buildings. One study on compliance and enforcement methods reported that enforcing energy codes may require a higher level of expertise and found that some local governments hire multiple code officials with specialized areas of expertise. Another study suggests that the complexities of energy codes make them impossible to enforce without a labor-intensive review of energy plans and documentation supported by extensive investments in hardware, software, training, and other resources. Energy efficiency practitioners suggest that education and training are critical during implementation, and that adopting jurisdictions must prepare code officials to enforce the energy code and prepare the building industry to comply with the code.
According to one study, the inability to ensure compliance with energy codes risks failing to capture the energy efficiency and cost savings they are designed to achieve. Despite the challenges, efforts to implement the new codes and standards in Louisiana and Mississippi are currently under way. For example, according to Louisiana Code Council officials, to some extent parishes have been enforcing the new building code since February 2006. The 11 most affected parishes have collaborated with surrounding governmental bodies to expand their existing offices or have hired third-party service providers. One official estimated that the number of code officials in the state has increased from about 35 to 100, mainly because the Louisiana Code Council is giving existing code officials, who are not certified to enforce the new code, up to 3 years to acquire their certification as they continue to conduct building inspections. Moreover, as of December 2006, the state had allocated $8 million for those parishes that did not previously have building code offices. Furthermore, Louisiana has a $14 million program, funded by Federal Emergency Management Agency (FEMA) funds, to provide assistance to local governments as they implement the new statewide building codes. The Mississippi Development Authority is using HUD funding to administer a $5 million grant program to coastal county governments to hire additional building code officials and inspectors to ensure compliance with the new building codes. The program also intends to help fund salaries, fringe benefits, travel, and training for building code enforcement officials for 1 year. Finally, Louisiana and Mississippi state energy office officials are providing education and training to code users to encourage the incorporation of energy efficiency and sustainable practices into the rebuilding of their states.
According to Louisiana officials, they will continue to provide training on energy codes and compliance methods, sponsor energy efficiency projects, and work with experts and universities to host forums to provide hands-on, project-specific, one-on-one assistance to those rebuilding and repairing destroyed and damaged structures. Officials from the Mississippi state energy office said that they are conducting similar efforts in their state. According to state officials, home builders, and non-profit organizations in Louisiana and Mississippi, consumers who desire to return to their homes face difficult financial questions regarding compensation payments, the higher costs of construction and insurance, and the availability of employment, which may make decisions about energy efficiency a low priority. Some state officials and non-profit organizations believe that compensation payments awarded to homeowners may not be enough to cover their mortgage balances or rebuilding costs. Qualified Louisiana and Mississippi homeowners may receive up to $150,000 in financial assistance from their state's homeowner's assistance program, which is funded by the federal government. However, the most recent available data show that the average amount received by residents in Louisiana and Mississippi is about $75,177 and $70,045, respectively. Representatives from non-profit organizations with whom we spoke told us that in some cases, homeowner mortgage balances and rebuilding costs exceed the payment amounts, leaving a funding gap that homeowners will have to fill. In addition, state officials whom we spoke with told us that the housing program does not provide additional funds to use for energy efficiency; thus, homeowners will have to pay any additional costs associated with making their homes more energy efficient.
According to home builders, non-profit organizations, and energy efficiency practitioners, homeowners may also have to consider the additional construction costs associated with new elevation requirements—that is, the costs to elevate their homes. Although FEMA provides $30,000 to cover the costs for building to higher elevations, it may cost more than that to build in some neighborhoods, based on FEMA's advisory base flood elevations and local parish and county community decisions to implement higher elevation requirements, according to some home builders. Representatives of a state home builders association told us that elevating a home can cost from $40,000 to more than $100,000, depending upon the house. According to state officials, home builders, and non-profit organizations, homeowners continue to deal with insurance claims and face difficult decisions about future coverage in light of higher insurance costs, if any coverage is available at all. By some news reports, insurance premiums have doubled or tripled in some areas. Increasing insurance costs may affect consumers' purchasing decisions regarding energy efficiency, thus preventing the energy cost savings opportunities presented by the Gulf Coast reconstruction from being realized. State officials and non-profit organizations told us that homeowners also will have to decide whether existing employment opportunities make returning to their homes feasible. Many residents lost their jobs when infrastructure was destroyed and employees and customers were displaced. Employment levels statewide in Mississippi returned to their pre-hurricane levels, while levels in the hardest-hit areas remained down, as did levels in Louisiana. In the absence of employment opportunities, many residents will likely not return to their homes.
Without adequate employment opportunities, even those residents who do return are likely to face financial hardships that will make decisions about repairing or rebuilding their homes in an energy efficient manner a low priority. Even after addressing these issues, homeowners will have to decide whether it is in their best financial interest to pay the additional costs to make their homes more energy efficient through purchases, such as energy efficient appliances, or to use their money for other purposes. For consumers, especially poor and low-income consumers, this decision may be compounded by their loss of income, assets, and other financial needs that will have to be met. One study we reviewed suggests that among the most important barriers generally affecting consumers and their purchasing decisions are limited information; limited awareness of and interest in energy costs and reducing energy expenses; and limited capital and rapid payback requirements. Consumers are less likely to voluntarily adopt energy efficiency measures without financial incentives and education on the costs and benefits. Because the rebuilding of the Gulf Coast is largely a state and local matter, HUD and DOE have played a supportive role in promoting energy efficient rebuilding. More specifically, HUD and DOE have provided financial and educational resources that can encourage the incorporation of energy efficiency in the reconstruction of the Gulf Coast. In addition, both agencies have broader national programs that may assist Louisiana and Mississippi in incorporating energy efficiency improvements during their rebuilding. HUD officials told us that they provided the affected Gulf Coast states with funding that can be used for, among other things, rebuilding in an energy efficient manner.
Congress has appropriated a total of $16.7 billion in Community Development Block Grant (CDBG) supplemental funding that has been allocated for use in the five affected Gulf Coast states for general rebuilding. These grants afford states a great deal of discretion in designing, rebuilding, and repairing housing; in neighborhood revitalization; and in economic development activities. The federal coordinator for Gulf Coast rebuilding has said that the CDBG program allows state leaders "who are closest to the issues" to make decisions regarding how the money should be spent. In Louisiana and Mississippi, these funds are mostly being used for restoring housing infrastructure. To receive CDBG funding, Louisiana and Mississippi as well as the other affected Gulf Coast states were required to submit a Disaster Action Plan—an overall plan for short- and long-term disaster recovery—to HUD for review and approval. States were required to describe, among other things, how their Disaster Action Plan would encourage construction methods that emphasize energy efficiency and promote the enactment and enforcement of modern building codes as part of their rebuilding process. HUD officials said they also have been working with Louisiana and Mississippi homeowner assistance programs to target CDBG funds to better assist states and consumers in rebuilding homes that are more energy efficient, safer, and storm resistant. In addition, HUD officials told us that they encourage public housing authorities to use energy efficient construction practices, appliances, and equipment. According to HUD, this was the case when the department approved and funded a $22 million grant to the Housing Authority of New Orleans and $7 million in grants to the Biloxi, Mississippi, Housing Authority from its Capital Fund Reserve for Emergencies and Natural Disasters to rebuild, repair, modernize, and improve the energy efficiency of damaged public housing units.
HUD officials told us that they also have disseminated information on energy efficiency to public housing authorities and participated in educational and training activities to assist state and local offices, consumers, and builders with considering energy efficient rebuilding. For example, the department distributed a special disaster recovery edition of its Public Housing Energy Conservation Clearinghouse e-newsletter, outlining energy efficiency measures that public housing authorities and residents can take to save energy and reduce utility costs. In addition, HUD was involved in several reconstruction activities that, while focused on hurricane preparedness and reconstruction, also provided information on energy efficiency. These activities included the Mississippi Governor's Reconstruction "Expo," where HUD disseminated extensive materials on its Partnership for Advanced Technologies in Housing (PATH) program, and the release of HUD "Tech Sets" on storm-resistant roofing and wind-resistant openings for use by homeowners, builders, and community officials in the affected Gulf Coast states. HUD also has actions that were planned or under way prior to the Gulf Coast hurricanes that are designed to improve the energy efficiency of the nation's public housing stock and that could potentially benefit the Gulf Coast states. These actions included the following:

- HUD's Energy Task Force is developing standard training program modules to promote energy efficiency in both new and existing HUD-assisted and financed housing. HUD also will develop materials on ways to improve household energy efficiency for housing authorities to disseminate to public housing residents.
- HUD, through its new Partnership for Home Energy Efficiency with DOE and the Environmental Protection Agency, is working to ensure that information on Energy Star products and appliances, Energy Star Qualified New Homes, and Home Performance with Energy Star for existing homes is available for distribution to public housing authorities, grant recipients, property managers, and new Federal Housing Administration (FHA) homebuyers.
- HUD is improving its tracking and monitoring of energy efficiency in public housing with an automated system that provides public housing authorities with data that serve as an indicator of the relative efficiency of individual properties and their potential for energy savings.

In its capacity as the nation's lead agency on energy efficiency issues, DOE's primary role in the Gulf Coast reconstruction has been to support states by providing training and education to state and local officials, private industry, and consumers. In direct response to the Gulf Coast hurricanes, DOE partnered with several entities, including state energy offices, to conduct training workshops on rebuilding with energy efficiency and storm-resistance practices for home builders, contractors, and consumers. For example, DOE, in partnership with HUD's PATH program, Home Depot, and Entergy Corporation, sponsored free home repair workshops in Louisiana and Mississippi that highlighted energy efficiency. Attendees had the opportunity to receive hands-on instruction on repairing storm-damaged roofs, ceilings, walls, and floors; installing windows, doors, and hurricane shutters; and improving a home's energy efficiency and durability. DOE also responded to a request from the Louisiana State Energy Office to provide Web-based code training sessions to architects, engineers, and code officials to train them on how to comply with the 2005 Louisiana ASHRAE Commercial Energy Building Code as they renovate and replace commercial buildings.
DOE also made educational resources available to all parties involved in the rebuilding efforts by developing a Disaster Recovery and Building Reconstruction Web site (www.eere.energy.gov/buildings) to (1) provide various educational resources to state and local officials, builders and contractors, and consumers, and (2) promote cost-effective and energy-efficient reconstruction. This Web site includes information on energy efficiency and rebuilding training opportunities and a wide range of guidelines, fact sheets, and case studies developed by DOE, HUD, FEMA, the National Association of Home Builders, and other organizations. DOE has taken other actions to encourage parties involved in the rebuilding process to consider energy efficiency. For example, it awarded a $100,000 grant to Louisiana, Mississippi, and other affected Gulf Coast states to incorporate energy efficiency and sustainable design practices into their rebuilding strategy. DOE also partnered with state energy offices to encourage the regional exchange of information and best practices. As part of its partnership with states, DOE hosted the Katrina Green Informal Working Group, a biweekly conference call with various federal and state officials, industry associations, builders, nonprofit organizations, and energy efficiency and housing experts, aimed at networking and sharing information about the rebuilding efforts in Gulf Coast states. DOE officials said that the agency plans to continue its efforts to encourage Louisiana and Mississippi and other affected states to rebuild more energy efficiently.
Finally, DOE also has ongoing nationwide energy efficiency initiatives to assist all states through several national programs and projects, including the following:

- Federal-State Partnership Projects: DOE recently awarded $6 million to fund 22 federal-state partnerships that will help implement training programs and provide technical assistance and education that is intended to ultimately result in the construction of more energy efficient buildings. Louisiana and Mississippi were among the states that were awarded partnership grants. Louisiana's project proposal, entitled Gulf Region High Performance Homes Program, is intended to spur market transformation in Louisiana and the Gulf Coast region through educational outreach, demonstration, technical assistance, and training on locally appropriate, hazard-resistant, energy-efficient, and healthy-building science and technologies. The goal of Mississippi's proposal, entitled Promoting Energy Codes and "Beyond Code" Programs through EPACT Tax Incentives, is to integrate building energy codes and "better than code" programs using the tax incentives of EPACT as a coordinating framework, and to promote building energy codes, DOE Building America approaches, and Energy Star Home procedures as avenues for qualifying for the buildings-related tax incentives in EPACT.
- State Energy Program (SEP): DOE's SEP provides grants to the states to design and carry out their own renewable energy and energy efficiency programs. Funding from SEP goes to state energy offices in all states and U.S. territories. States use these grants to address their energy priorities and to adopt emerging renewable energy and energy efficiency technologies. SEP projects are managed by state energy offices, not by DOE directly. In 2006, DOE provided over $650,000 in SEP grants to Louisiana and about $400,000 to Mississippi.
- Weatherization Assistance Program: This program enables low-income families to permanently reduce their energy bills by making their homes more energy efficient. According to DOE, it is this country's longest-running and perhaps most successful energy efficiency program. During the last 30 years, DOE's Weatherization Assistance Program has provided weatherization services to more than 5.5 million low-income families. DOE reported that, on average, weatherization reduces overall energy bills by $358 per year at current prices. In 2006, about $2 million in weatherization funds were provided to Louisiana and about $1.9 million went to Mississippi.

While the current level of reconstruction and the difficulties surrounding the return of residents are unsettling for both individuals and communities, the nature and status of rebuilding actually creates significant opportunities for incorporating energy efficiency measures into reconstruction and rebuilding efforts. Nonetheless, as great as the potential opportunities are, the challenges that must be overcome to capitalize on these opportunities and actually achieve energy cost savings are equally significant. Since most of the reconstruction in Louisiana and Mississippi is still in the planning phase, there is still time to address the challenges of incorporating energy efficiency in the rebuilding of the Gulf Coast. Meeting these challenges will undoubtedly benefit consumers, the Gulf Coast region, and the nation. While the rebuilding of the Gulf Coast is largely a state and local matter, HUD and DOE have provided states and consumers with funding and educational resources to assist in the largest reconstruction effort in the nation's history.
Going forward, there will be a growing opportunity to incorporate energy efficiency measures during the rebuilding process—as states and local governments decide how and to what extent to implement and enforce new building codes, and consumers begin to make decisions about whether making energy efficient choices is in their best financial interest. Given that improved energy efficiency measures, such as updated building codes and energy efficient building materials, are new to the Gulf Coast region, states and consumers can greatly benefit from DOE's expertise in these areas. DOE's expertise as well as HUD and DOE resources may prove invaluable to states and consumers as they make decisions about building code training and enforcement, energy efficiency construction practices, and purchasing energy efficient appliances and equipment. We provided a draft of this report to DOE and HUD for their review and comment. DOE provided technical and clarifying comments, which we incorporated as appropriate. HUD had no comments on the report. We are sending copies of this report to interested congressional committees, the Secretary of Energy, the Secretary of Housing and Urban Development, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-3841 or gaffiganm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
During our review, our objectives were to (1) analyze the extent of opportunities for incorporating energy efficiency improvements and realizing energy cost savings in the Gulf Coast reconstruction, (2) discuss potential challenges to realizing energy cost savings during the reconstruction, and (3) describe the role of the Department of Housing and Urban Development (HUD) and the Department of Energy (DOE) in promoting energy efficiency in the rebuilding of the Gulf Coast. To estimate potential energy cost savings from rebuilding and repairing residential and commercial structures on the Gulf Coast, we worked with DOE's Pacific Northwest National Laboratory (PNNL). PNNL modeled the levels of energy efficiency that could be achieved if the buildings were rebuilt or repaired to meet newer building codes and standards or "above code" levels, and compared these measures with a baseline that approximately reflected the energy efficiency of these buildings prior to the Gulf Coast hurricanes. Separate analyses were conducted for representative residential and commercial building types. We worked with PNNL in developing the model assumptions, including the size and characteristics of representative residential and commercial buildings, the building codes and standards that were used, the future costs of fuels, the heating and cooling climate of the area, and the discount rate used for consumers' valuation of future fuel cost savings from more energy efficient equipment and materials. We found PNNL's models and assumptions reasonable and sufficiently reliable for the purposes of this report. For a representative residential Gulf Coast home, PNNL modeled several energy efficiency scenarios—two baseline measures, an energy code level, and two "above code" levels. PNNL used an energy simulation tool developed at the Florida Solar Energy Center and DOE's Energy Information Administration forecasts of natural gas and electricity prices.
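One model input noted above, the discount rate used for consumers' valuation of future fuel cost savings, enters the analysis as it would in any standard present-value calculation. The sketch below illustrates the mechanics only; the savings amount, discount rate, and time horizon are hypothetical placeholders, not PNNL's actual assumptions.

```python
# Illustrative present-value calculation for a stream of annual fuel cost
# savings. All inputs are hypothetical placeholders, not PNNL assumptions.

def present_value(annual_saving, discount_rate, years):
    """Present value of a constant annual savings stream, discounted yearly."""
    return sum(annual_saving / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# A hypothetical $200-per-year fuel cost saving valued over 30 years at a
# 5 percent discount rate:
pv = present_value(annual_saving=200, discount_rate=0.05, years=30)
print(f"Present value of future savings: ${pv:,.2f}")
```

A higher discount rate shrinks the value consumers place on future savings, which is why this assumption matters when weighing more efficient equipment against its higher upfront cost.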
PNNL also modeled the efficiency gains that could be achieved by bringing Gulf Coast commercial buildings into compliance with current, more efficient, energy standards for four prototypical buildings—offices, schools, hospitals, and retail. PNNL estimated the annual energy cost savings associated with three levels of energy standards—baseline efficiency, the current code's higher efficiency, and "above code" building standards. To aggregate potential residential energy cost savings from rebuilding or repairing destroyed and damaged homes in the Gulf Coast region, we used PNNL's estimates of annual energy cost savings for a representative home built to different levels of energy efficiency and federal estimates of the aggregate number of these homes to estimate the scope for savings. We reviewed the methodology used to estimate the number of damaged and destroyed homes, including the steps that were taken to ensure the reliability of these data, and determined that the estimates were sufficiently reliable for our purposes. To understand the potential challenges that may limit energy cost savings from being realized, we relied on site visits to Louisiana and Mississippi, interviews with state government officials, and attendance at local building conferences and housing summits. Furthermore, we interviewed energy efficiency practitioners, building industry representatives, and non-profit organizations as well as HUD and DOE officials to solicit their views on the challenges of incorporating energy efficiency measures in the rebuilding and repairing of destroyed and damaged buildings. To describe the role of HUD and DOE in promoting energy efficiency in the rebuilding of the Gulf Coast, we interviewed agency officials and obtained and reviewed documentation describing the actions that these agencies have taken to assist Louisiana and Mississippi.
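The residential aggregation described above, multiplying PNNL's per-home annual savings estimates by federal estimates of the number of destroyed and damaged homes, reduces to a simple weighted sum. The per-home savings and home counts below are placeholders for illustration only, not the actual PNNL results or federal damage estimates.

```python
# Illustrative sketch of the savings aggregation; the per-home savings and
# home counts are hypothetical placeholders, not actual estimates.

def aggregate_savings(per_home_annual_savings, homes_by_category):
    """Sum annual energy cost savings (dollars per year) across categories
    of rebuilt or repaired homes."""
    return sum(per_home_annual_savings[category] * count
               for category, count in homes_by_category.items())

# Hypothetical inputs:
per_home = {"destroyed": 300, "major_damage": 150}      # $/year per home
homes = {"destroyed": 50_000, "major_damage": 80_000}   # number of homes

total = aggregate_savings(per_home, homes)
print(f"Estimated aggregate savings: ${total:,.0f} per year")  # $27,000,000
```

Because the aggregate scales linearly with the home counts, the scope for savings depends directly on the extent of rebuilding, which is why the report presents a range rather than a single figure.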
We also conducted site visits to these states to obtain firsthand knowledge from state government officials, non-profit organizations, home builders, and energy efficiency practitioners about their views on HUD's and DOE's efforts to promote or work with various stakeholders to consider energy efficiency in the rebuilding process. We conducted our work from March 2006 through May 2007 in accordance with generally accepted government auditing standards, which included an assessment of data reliability. Tables 1 through 4 contain energy cost savings estimates for homes built in accordance with various energy efficiency standards and for homes repaired with selected energy efficiency-related improvements. Tables 5 through 7 contain energy cost savings estimates for commercial buildings—office, school, hospital, and retail buildings—constructed in accordance with various commercial building energy standards, to "above code" levels, and with more efficient lighting requirements. On a per building basis, we estimated the energy cost savings that could be achieved in Mississippi and Louisiana by moving from their current energy standards to the LEED 1-point and 10-point levels as well as the federal tax credit level, as shown in tables 6 and 7. In addition to the contact person named above, Dan Haas, Assistant Director; Mark Braza; Jacqueline Cook; John Delicath; Yvette Gutierrez-Thomas; Raun Lazier; Paul Pansini; Anne Stevens; and Barbara Timmerman made key contributions to this report.

Following several hurricanes in 2005, the need to rebuild and repair destroyed and damaged homes and buildings in the Gulf Coast region may create opportunities for making energy efficiency improvements and realizing energy cost savings. While numerous federal agencies are involved in the recovery process, the Department of Housing and Urban Development (HUD) and the Department of Energy (DOE) interact with the states on a regular basis regarding matters of energy efficiency.
This report, initiated under the authority of the Comptroller General of the United States, examines (1) the extent of opportunities for incorporating energy efficiency improvements in the Gulf Coast reconstruction, (2) potential challenges to realizing the energy cost savings during the reconstruction, and (3) the role of HUD and DOE in promoting energy efficiency in the rebuilding of the Gulf Coast. GAO limited the scope of its work to Louisiana and Mississippi since these states experienced the majority of the hurricane damage. GAO assessed opportunities for incorporating energy efficiency measures by conducting site visits and interviewing federal and state government officials, home builders, and energy efficiency experts. GAO also worked with a DOE national laboratory to develop energy cost savings estimates. GAO is making no recommendations. Reconstruction in the Gulf Coast creates a significant opportunity for incorporating energy efficiency improvements that could produce long-term energy cost savings in residential and commercial buildings. The sheer magnitude of the reconstruction effort and Louisiana's and Mississippi's recent adoption of more energy-efficient building codes make this an opportune time for incorporating energy efficiency improvements in the rebuilding efforts. In partnership with a DOE national laboratory, GAO analyzed energy cost savings opportunities and estimated that adopting these newer building codes could reduce residential energy costs in these two states by at least $20 to $28 million per year, depending on the extent of the rebuilding efforts in these states. Furthermore, the analysis also showed that annual energy expenditures for commercial buildings—hospitals, schools, offices, and retail buildings—built to newer energy standards could be about 7 to 34 percent lower than for buildings built to older standards.
There also are opportunities for consumers to make additional energy efficiency improvements to both building types by replacing old, damaged equipment. There are three substantial challenges to realizing the energy cost savings opportunities presented by the Gulf Coast reconstruction: (1) the shortage of a skilled construction workforce and, specifically, of workers trained to meet the newer building codes; (2) the lack of trained building code inspectors to ensure compliance with newer building codes in Louisiana and Mississippi; and (3) the difficult financial issues facing consumers, such as the sufficiency of insurance and other compensation payments, that may make decisions about energy efficiency a low priority. States have efforts under way to address many of these challenges, and it will take time and sustained commitment for them to be successful. The rebuilding of the Gulf Coast is largely a state and local matter, but HUD and DOE have played a supportive role in promoting energy efficient rebuilding. HUD and DOE have provided financial and educational resources that can encourage energy efficient rebuilding, and both agencies have broader national programs that may support energy efficiency improvements in the rebuilding of the Gulf Coast. HUD has made $16.7 billion in funding available for general rebuilding purposes, such as restoring damaged housing, and allows states to determine how to spend these funds, including using them for energy efficient improvements. HUD also has several national initiatives that may directly improve the energy efficiency of the public housing stock in Gulf Coast states. DOE has sponsored education and training on energy efficiency issues for state and local officials, private industry, and consumers in Louisiana and Mississippi. As part of its nationwide effort to assist all states with energy efficiency initiatives, DOE provides grants to states to design and carry out their own energy efficiency programs.
DOE's energy expertise as well as HUD and DOE resources may prove valuable to the states and consumers as they make decisions about energy efficient rebuilding in the Gulf Coast. |
DOJ awards federal financial assistance to state and local governments, for-profit and nonprofit organizations, tribal jurisdictions, and educational institutions, to help prevent crime, assist victims of crime, and promote innovative law enforcement efforts. Federal financial assistance programs provide funding pursuant to statutory authorization and annual appropriations through formula grants, discretionary grants, cooperative agreements, and other payment programs, but are all generally referred to as grants. From fiscal year 2005 through fiscal year 2012, approximately $33 billion was appropriated to support the more than 200 grant programs that DOJ manages. DOJ administers its grant programs through three granting agencies—the Office of Justice Programs (OJP), the Office on Violence Against Women (OVW), and the Community Oriented Policing Services (COPS) Office. OJP is the largest of DOJ's granting agencies, and its mission to develop the nation's capacity to prevent and control crime, administer justice, and assist crime victims is broader than that of OVW or the COPS Office. OJP's bureaus and offices administer grant programs that address victim assistance, technology and forensics, and juvenile justice, among other things. One such grant program is the BVP program, which was created following enactment of the Bulletproof Vest Partnership Grant Act of 1998, and provides grants on a competitive basis to state and local law enforcement agencies to assist in their purchasing of ballistic-resistant and stab-resistant body armor. The COPS Office grant programs focus on advancing community policing, which generally involves cooperation between police departments and community residents in identifying and developing solutions to crime problems. OVW administers grant programs related to domestic violence, dating violence, sexual assault, and stalking.
DOJ and Treasury both operate asset forfeiture programs that are designed to prevent and reduce crime through the seizure and forfeiture of assets that represent the proceeds of, or were used to facilitate, federal crimes. Each department also maintains a separate fund that is the receipt account for the deposit of forfeitures. Over the years, a series of laws has been enacted that has expanded forfeiture from drug offenses to money laundering, financial crimes, and terrorism-related offenses. In addition to depriving criminals of property used or acquired through illegal activities, these programs are designed to enhance cooperation among foreign, federal, state, and local law enforcement agencies through the equitable sharing of assets recovered through the program, and, as a by-product, produce revenues in support of future law enforcement investigations and related forfeiture activities. A number of federal law enforcement organizations participate in DOJ's Assets Forfeiture Fund (AFF), including the U.S. Marshals Service, which serves as the primary custodian of seized and forfeited property for the program. Once property is forfeited to the government, it is subsequently sold, put into official use, destroyed, or transferred to another agency. Cash and monetary instruments that have been forfeited and property that has been forfeited and sold are subsequently deposited in the forfeiture fund. In fiscal year 2012, the value of total assets in the AFF was approximately $5.97 billion. Money collected in the funds is used to pay for expenses related to the asset forfeiture program and for other law enforcement initiatives.
DOJ, the Department of Homeland Security (DHS), and the Office of National Drug Control Policy (ONDCP) operate or support, through grant funding or personnel, five types of field-based information-sharing entities that may collect, process, analyze, or disseminate information in support of law enforcement and counterterrorism-related efforts, as shown in table 1. In general, the five types of entities in our review were established under different authorities and have distinct missions, roles, and responsibilities. As of January 2013 there were a total of 268 of these field-based entities located throughout the United States, and DOJ, DHS, and ONDCP provided an estimated $129 million in fiscal year 2011 to support three of the five types of entities. In July 2012, we reported that DOJ’s more than 200 grant programs overlapped across 10 key justice areas, and that this overlap contributed to the risk of unnecessarily duplicative grant awards for the same or similar purposes. We also recognized that overlapping grant programs across programmatic areas result in part from authorizing statutes. Further, we recognized that overlap among DOJ’s grant programs may be desirable because such overlap can enable DOJ’s granting agencies to leverage multiple funding streams to serve a single justice purpose. However, we found that the existence of overlapping grant programs is an indication that agencies should increase their ability to monitor where their funds are going and coordinate to ensure that any resulting duplication in grant award funding is purposeful rather than unnecessary, and we made recommendations to reflect these needed improvements. In addition, we found that OJP, OVW, and the COPS Office did not routinely share lists of current and potential awardees to consider both the current and planned dispersion and purposes of all DOJ grant funding before finalizing new award decisions. 
Our work found instances where DOJ made multiple grant awards to applicants for the same or similar purposes without being aware of the potential for unnecessary duplication or whether funding from multiple streams was warranted. We also reported that OJP, OVW, and the COPS Office had not established policies and procedures requiring consistent coordination and information sharing among DOJ's granting agencies. Further, we found that OJP and OVW used a separate grants management system from the COPS Office, limiting their ability to share information on the funding they have awarded or are preparing to award to a recipient. According to COPS Office officials, its mission and grant management processes are different enough to necessitate a separate system. However, OJP officials told us that its system has been and can be modified with minimal investment to accommodate different grant processes. We included some of these related findings in GAO, 2012 Annual Report: Opportunities to Reduce Duplication, Overlap, and Fragmentation, Achieve Savings, and Enhance Revenue, GAO-12-342SP (Washington, D.C.: Feb. 28, 2012). DOJ concurred with all eight of our recommendations.
Five of the recommendations specifically relate to ways in which DOJ can improve program efficiency and resource management, and these are that DOJ conduct an assessment to better understand the extent to which the department’s grant programs overlap with one another and determine if grant programs may be consolidated; coordinate within and among granting agencies on a consistent basis to review potential or recent grant awards from grant programs that DOJ identifies as overlapping, before awarding grants; require its grant applicants to report all federal grant funding, including all DOJ funding, that they are currently receiving or have recently applied for in their grant applications; provide appropriate OJP, COPS Office, and OVW staff access to both grant management systems; and ensure its comprehensive study of DOJ grant management systems assesses the feasibility, costs, and benefits of moving to a single grants management system, including the steps needed to harmonize DOJ grant processes, so that any variation in how granting agencies manage their portfolios is not an encumbrance to potential system unification. DOJ has taken steps to partially address these recommendations. Specifically, DOJ has formed an assessment team, composed of OJP, OVW, and COPS Office representatives, to review all of the department’s fiscal year 2012 grant program solicitations, or announcements, and categorize them by several elements. These elements include program type, eligible grant funding recipients (e.g., states, localities, tribes, and law enforcement agencies), target grant award beneficiaries (e.g., victims and juveniles), allowable uses of the funds, and locations funded. The assessment team is also developing criteria to identify potentially duplicative programs and then plans to assign risk levels of potential duplication to those that have multiple solicitations addressing similar key components. 
According to DOJ officials, the assessment team plans to conclude its work later in 2013. In addition, OJP has granted read-only access of its grants management system to OVW and the COPS Office to allow pertinent staff in those offices to access the most up-to-date OJP grant information. Further, OJP officials said that they are exploring ways in which more data systems may be used for coordinating grants. DOJ officials anticipate that eventually, agencies can leverage the information in these systems during the preaward process to avoid funding potentially overlapping and duplicative grant activities; however, DOJ’s plans rest upon completion of the assessment team’s work. Officials told us that upon receipt of the assessment team’s findings, they plan to work to develop and support a targeted and strategic approach to reviewing applications across all three granting agencies before making grant award decisions. DOJ officials noted that as part of this approach, DOJ plans to establish policies and procedures to govern coordination efforts. Thus, completion of this assessment could better position DOJ to take more systemic actions— such as improved coordination and potential consolidation of its programs—to limit overlap and mitigate the risk of unnecessary duplication. DOJ has also initiated a feasibility study of moving to a single grants management system that includes the identification of the steps needed to harmonize grant processes, among other factors such as return on investment. Since this study—like DOJ’s other efforts to address all of our recommendations—is still under way, it is too soon to tell whether the department’s actions will fully address each of the recommendations. We have also previously reported on and made recommendations related to DOJ’s BVP grant program. 
In February 2012, we reported that DOJ had designed several controls for the BVP program to ensure grantee compliance with program requirements, among other things, but could take additional action to further reduce management risk. For example, we found that from fiscal years 2002 to 2009, the BVP program had awarded about $27 million in BVP grants to grant recipients who did not ultimately seek reimbursement. Since the grant terms for each of these grantees had ended, the grantees were no longer eligible for reimbursement and DOJ could deobligate these funds. To improve DOJ’s resource management, we recommended that DOJ deobligate undisbursed funds from grants in the BVP program whose terms have ended. Further, we noted that since the BVP program received about $24 million in fiscal year 2012, deobligating this $27 million could have significant benefits. For example, deobligating this funding could enable the department to apply the amounts to new awards or reduce requests for future budgets. The department concurred with this recommendation and has since deobligated $2 million. In early April 2013, DOJ officials stated that they expect to complete the deobligation process before the end of April 2013. They also said the process is time-intensive because it has involved reconciliation among multiple data and financial management systems. DOJ officials stated that they plan to use the deobligated funds toward fiscal year 2014 BVP awards. In September 2012, we found that DOJ and Treasury had made limited progress to consolidate their asset forfeiture property management activities. Specifically, the departments had made limited progress in sharing storage facilities or contracts, and they had not fully explored the possibility of coordinating or consolidating the management of their assets to achieve greater efficiencies, effectiveness, and cost savings. 
As a result, each department maintained separate asset-tracking systems, separate contracts, and separate storage facilities, which we found to be potentially duplicative. For example, DOJ and Treasury maintain four separate asset-tracking systems—DOJ maintains one system and Treasury maintains three—to support their respective asset forfeiture program activities, and these four tracking systems have similar functionalities. According to DOJ and Treasury data, the cost of developing, maintaining, and overseeing their four asset-tracking systems in fiscal year 2011 totaled $16.2 million for DOJ's asset-tracking system and $10.4 million for the three Treasury asset-tracking systems combined. Further, we found that in some cases, storage facilities are located in the same geographic area. For example, both the U.S. Marshals Service—the primary custodian of DOJ's seized assets—and Treasury maintain vehicle storage facilities, 40 percent of which are within 20 miles of each other. DOJ and Treasury officials noted that when Congress passed a law establishing the Treasury Forfeiture Fund in 1992, it recognized the differences in the programs' missions, which warranted creating separate programs, and this encouraged independent operational decisions that eventually created additional differences between the two programs. Both programs are designed to reduce and prevent crime. DOJ's asset forfeiture program represents the interests of law enforcement components within its department as well as several components outside the department, while Treasury's program represents the interests of Treasury and DHS components. We recognized the separate legal authorities of the two funds, but noted that those legal authorities did not preclude enhanced coordination within programs.
Thus, we recommended that DOJ and Treasury conduct a study to determine the feasibility of consolidating potentially duplicative asset management activities including, but not limited to, the use of asset-tracking systems and the sharing of vendor and contract resources. The departments concurred with this recommendation. As of March 2013, DOJ officials reported that DOJ and Treasury representatives had met several times in the fall of 2012 and thereafter agreed upon an approach to conduct the study and assess potential costs. DOJ officials noted that they would continue to meet with their Treasury partners to execute their plan. Since work remains under way, it is too soon to tell whether the departments’ actions will fully address the recommendation. In July 2012, we reported on the growth of revenues and expenses in DOJ’s AFF from fiscal years 2003 to 2011, and the need for transparency in DOJ’s process for carrying over funds from one fiscal year to the next. Each year, DOJ earns revenue from the proceeds of the forfeited assets it collects. It then pays its expenses, which include payments to victims and the costs of storing and maintaining forfeited assets. DOJ uses any balance to help cover anticipated expenses in the next fiscal year that may not be covered by that year’s revenues, and this is known as carrying over funds. For example, at the end of fiscal year 2003, DOJ carried over approximately $365 million to cover expenditures in the next fiscal year. In contrast, at the end of fiscal year 2011, DOJ carried over $844 million to cover expenses into fiscal year 2012. After DOJ reserves funds to cover needed expenses, DOJ declares any remaining funds to be an excess unobligated balance and has the authority to use these funds for any of the department’s authorized purposes. In recent years, DOJ also used these excess unobligated balances to cover rescissions. 
For example, in fiscal year 2011, DOJ used excess unobligated balances to help cover a $495 million AFF program rescission. Also, in fiscal year 2012, DOJ used $151 million of the remaining AFF funds identified at the end of the fiscal year to acquire the Thomson Correctional Center in Thomson, Illinois. At the time of our review, when determining the amounts to carry over, DOJ officials reviewed historical data on past program expenditures, analyzed known future expenses such as salaries and contracts, and estimated the costs of any potential new expenditures. However, as we concluded on the basis of our findings in July 2012, without a clearly documented and transparent process, it was difficult to determine whether DOJ's conclusions regarding the amounts that need to be carried over each year were well founded. We recommended that DOJ clearly document how it determines the amount of funds that needs to be carried over for the next fiscal year, a recommendation with which DOJ concurred. DOJ officials stated that they plan to include information on the basis for the department's decisions concerning the amount of funds to be carried over in future Congressional Budget Justifications, but as of March 2013, the decision on how to present the information was still pending. Since this information has not yet been made available, it is too soon to tell whether it will fully address the recommendation. In April 2013, we identified overlap in some activities of five types of field-based information-sharing entities and concluded that DOJ, DHS, and ONDCP could improve coordination among the entities to help reduce unnecessary overlap in activities. In general, the five types of entities in our review were established under different authorities and have distinct missions, roles, and responsibilities. We reviewed their activities in eight urban areas and found overlap as each carried out its respective missions, roles, and responsibilities.
Specifically, we identified 91 instances of overlap in analytical activities and services, with more instances of overlap involving a fusion center and a Field Intelligence Group (54 of the 91 instances) compared with the other three types of entities. For example, we found that in five of the eight urban areas, the fusion center, Regional Information Sharing Systems center, and the Field Intelligence Group disseminated information on all crimes—which can include terrorism and other high-risk threats as well as other types of crimes—for federal, state, and local customers including state and local police departments. In addition, we found 32 instances of overlap in investigative support activities across the eight urban areas reviewed, with more instances of overlap involving a Regional Information Sharing Systems center and a fusion center (18 of the 32 instances) compared with the other three entities. For example, in one urban area, the Regional Information Sharing Systems center and the fusion center both conducted tactical analysis, target deconfliction, and event deconfliction within the same mission area for federal, state, and local customers. We reported that overlap, in some cases, can be desirable. In particular, overlap across analytical activities and services can be beneficial if it validates information or allows for competing or complementary analysis. Nevertheless, overlap can also lead to inefficiencies if, for example, it burdens law enforcement customers with redundant information. To promote coordination, we recommended two actions. First, we recommended that the Attorney General, the Secretary of Homeland Security, and the Director of ONDCP collaborate to develop a mechanism that would allow them to hold field-based information-sharing entities accountable for coordinating, and to monitor and evaluate the coordination results achieved.
Second, we recommended that the Attorney General, the Secretary of Homeland Security, and the Director of ONDCP work together to assess opportunities where practices that enhance coordination can be further applied. DHS and ONDCP concurred with both recommendations. DOJ generally concurred with both recommendations, but asserted that it was already actively promoting coordination and routinely seeking to identify efficiency gains. For example, DOJ cited its participation in summits with other agencies, including DHS, and the colocation of certain field-based entities as evidence in support of this. While these efforts are positive steps for sharing information and coordinating, we noted and continue to believe that they do not fully address the recommendations. We maintain that an accountability mechanism to ensure coordination could add valuable context to any existing interagency discussions while encouraging entities to engage in coordination activities, such as leveraging resources to avoid unnecessary overlap. Further, our recommendation calls for DOJ, DHS, and ONDCP to collectively assess opportunities to enhance coordination through whatever effective means they identify. Chairman Sensenbrenner, Ranking Member Scott, and members of the subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information about this statement, please contact David C. Maurer, Director, Homeland Security and Justice Issues, at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, the following individuals also made contributions to this testimony: Joy Booth, Assistant Director; Sylvia Bascope; Michele Fejfar; Heather May; Lara Miklozek; Linda Miller; and Janet Temko. This is a work of the U.S.
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In fiscal year 2012, DOJ's $27 billion budget funded a broad array of national security, law enforcement, and criminal justice system activities. GAO has examined a number of key programs where DOJ has sole responsibility or works with other departments and recommended actions to improve program efficiency and resource management. This statement summarizes findings and recommendations from recent GAO work in the following five areas: (1) overlap and potential duplication in DOJ grant programs; (2) DOJ's management of undisbursed funds from BVP grant awards whose terms have ended; (3) potential duplication in DOJ and Treasury asset forfeiture programs; (4) DOJ's management of asset forfeiture funds; and (5) overlap among DOJ and other federally funded field-based information sharing entities. This statement is based on prior products GAO issued from February 2012 through April 2013, along with selected updates obtained from April 2012 through April 2013. For the selected updates on DOJ's progress in implementing recommendations, GAO analyzed information provided by DOJ officials on actions taken and planned. In July 2012, GAO reported that the Department of Justice's (DOJ) more than 200 grant programs overlapped across 10 key justice areas, and that this overlap contributed to the risk of unnecessarily duplicative grant awards for the same or similar purposes. GAO has recommended, among other steps, that DOJ conduct an assessment to better understand the extent of grant program overlap and determine if consolidation is possible.
DOJ has begun taking related actions, but it is too early to assess their impact. In February 2012, GAO reported that DOJ's Bulletproof Vest Partnership (BVP) Program--a source of funding for law enforcement ballistic- and stab-resistant body armor--had not taken steps to deobligate about $27 million in unused funds from grant awards whose terms had ended. GAO recommended that DOJ deobligate these funds and, for example, apply the amounts to new awards or reduce requests for future budgets. DOJ officials have since deobligated $2 million and plan to deobligate the rest by the end of April 2013. DOJ officials plan to apply the funds toward fiscal year 2014 BVP grants. In September 2012, GAO reported that DOJ and the Department of the Treasury (Treasury) conducted potentially duplicative asset management activities related to the seizure and forfeiture of assets associated with federal crimes. For example, GAO reported that each agency maintains separate tracking systems for seized and forfeited property. GAO recommended that DOJ and Treasury conduct a study to determine the feasibility of consolidating their asset management activities. In March 2013, DOJ officials reported that DOJ and Treasury had agreed upon an approach to conduct the study and assess potential costs, but that meetings between the departments were still ongoing and the study had not been finalized. In July 2012, GAO reported that annual revenues from DOJ's Assets Forfeiture Fund exceeded annual expenditures, allowing DOJ to carry over $844 million at the end of fiscal year 2011, in part to reserve funds for the next fiscal year. However, DOJ does not clearly document how it determines the amounts that need to be carried over. GAO recommended that DOJ more clearly document how it determines the carryover amounts. DOJ officials reported that they plan to provide this information, but as of March 2013, had not yet determined how to present the information.
In April 2013, GAO reported on overlap in activities and services across field-based entities operated or supported by DOJ, the Department of Homeland Security, and the Office of National Drug Control Policy that may share terrorism-related information, among other things. GAO identified 91 instances of overlap in some analytical activities, such as disseminating information on similar issue areas, including terrorism. GAO recommended, in part, that the federal agencies collaborate to hold the entities accountable for coordination and assess where practices that enhance coordination could be applied. DOJ generally agreed with the intent of the recommendations, but stated that DOJ has already taken steps to promote coordination. The steps, however, do not establish an accountability mechanism for monitoring coordination or assessing practices. GAO has made several recommendations to DOJ in prior reports to help improve program efficiency and resource management. DOJ generally concurred with these recommendations and is taking actions to address them. |
Neither industry data nor data collected by federal regulators document the extent to which banks have invested trust assets in proprietary mutual funds. However, we were able to gain some insight into this question by comparing several data sources. Some relevant statistical information was available from the reports on trust assets that banks must submit annually to federal regulators and from mutual fund data compiled by Lipper Analytical Services, a private firm. To understand the types of trust assets that might be converted into mutual funds, we talked to federal bank regulators at the Office of the Comptroller of the Currency (OCC), the Federal Reserve, and the Federal Deposit Insurance Corporation (FDIC). We also interviewed trust department officials at 10 banks to understand the extent to which banks have invested trust assets in mutual funds. The banks had trust departments ranging from several billion to hundreds of billions of dollars in trust assets. These banks were chosen on a judgment basis and do not represent a statistically valid sample. To address questions about disclosure, consent, fees, and controls over banks acting in their self-interest when investing trust assets in proprietary mutual funds, we (1) reviewed relevant legislation, (2) talked to the federal bank regulators mentioned above, (3) reviewed 13 examination reports of trust departments, and (4) talked to officials at the Department of Labor and the Securities and Exchange Commission (SEC). The American Bankers Association (ABA) provided us with data from a 1993 survey on the status of state laws governing the investment of trust assets in proprietary mutual funds. We also received comments from the Federal Reserve, OCC, FDIC, and Labor, which we incorporated into the report where applicable. A more detailed discussion of the scope and methodology of our work is presented in appendix I. 
About 3,000 banks reported in 1992 that they provided fiduciary services to individuals, corporations, and charities. (See app. II.) At year-end 1993, these services involved over 11 million accounts with total assets of about $10.6 trillion. Most trust assets were in custodial or other accounts for which the bank did not provide investment management service. About $2 trillion of the $10.6 trillion were in discretionary trust asset accounts (i.e., accounts for which banks provided investment management service). Most assets in these discretionary accounts were managed for employee benefit plans ($914 billion) and personal trusts ($556 billion). In 1993, $760 billion—representing approximately half of the funds contained in discretionary employee benefit plans and personal trusts—were invested in pooled trust investment funds. Discretionary trust assets that were not invested in pooled funds were invested in separate portfolios for each account. Pooled trust investment funds were generally used for relatively small accounts or to achieve diversification in areas that were too costly to achieve on an individual account basis. Pooled trust investment funds are similar to mutual funds in that the cash assets of many trust accounts are commingled in a single investment portfolio for the proportional benefit of all participating accounts. Pooled trust investment funds, like mutual funds, may serve different investment objectives and can be invested in such instruments as short-term treasuries, long-term bonds, growth stocks, and tax-free bonds. However, unlike mutual funds, pooled trust investment funds are not marketed to the general public. Pooled trust investment funds can be transferred to a bank proprietary mutual fund in a single transaction known as a conversion. Pooled trust investment funds for employee benefit accounts and personal trusts differ in some ways.
The pooled funds are maintained separately for each type of account because employee benefit plans have different legal requirements. Pooled trust investment funds for personal trusts are called common trust funds. At the end of 1993, two-thirds of the discretionary employee benefit fund trust assets were placed in pooled trust investment funds, while less than one-third of the discretionary assets of personal trusts were placed in common trust funds. (See figs. 1 and 2, which show $608 billion in pooled trust investment funds for employee benefit plans, and $152 billion in common trust funds and $404 billion in individually managed funds for personal trusts.) Unlike mutual funds, a bank's operation of a pooled trust investment fund is specifically not covered by the Investment Company Act of 1940 or the Securities Act of 1933. Instead, banks acting as trustees are subject to a substantial body of federal, state, and common laws and regulations in addition to those governing the usual commercial banking activities. Employee benefit plans are governed by ERISA's fiduciary responsibility provisions, which are administered by Labor. ERISA allows the investment of trust assets in proprietary mutual funds for employee benefit plans. State law as well as the trust agreement governs allowable investments for personal trusts. At least 39 states and the District of Columbia allow personal trust assets to be invested in proprietary mutual funds. We could not determine the status of the remaining 11 states from available information. Two common-law principles underlie the operation and regulation of trusts: the duty of loyalty and prudence. The duty of loyalty requires a fiduciary to act solely in the interests of its clients, excluding all self-interest. Banks are required to resolve any possible conflicts of interest in favor of the trust account and its beneficiaries, not the bank.
In addition, the fiduciary is to avoid situations of potential conflict of interest that may prevent it from serving in the best interest of a client. The duty of prudence requires banks that act as fiduciaries to be able to justify the suitability of trust investments for trust accounts. Federal trust regulatory authority stems in part from the fact that the bank regulators have the authority to grant or terminate the trust powers of banks and bank holding companies and their bank and trust subsidiaries. Under 12 U.S.C. 92a, OCC is authorized to grant permission for a national bank to act as a fiduciary and to promulgate regulations governing the proper exercise of fiduciary powers. OCC supervises the trust activities of national banks under regulation 12 C.F.R. part 9. State banks' trust activities are supervised by the Federal Reserve or FDIC using regulations similar to OCC's. The Federal Reserve also supervises the trust company subsidiaries of bank holding companies. Federal regulation of common trust funds has been strengthened by an Internal Revenue Code requirement that banks adhere to section 9.18 of OCC's regulations to qualify for certain federal tax benefits. Federal bank regulators allow banks to invest trust assets in securities, including those of a proprietary mutual fund, only if the purchase is permitted by ERISA for employee benefit accounts, or by state law, the trust instrument, or court order for personal trust accounts. OCC officials said that, without specific federal or state legislation authorizing the use of proprietary mutual funds in trust accounts, such use is a breach of trust because of the unauthorized conflict of interest. Labor permits the investment of trust assets in proprietary mutual funds under a set of conditions stated in Prohibited Transactions Exemption (PTE) 77-4. According to the ABA survey, at least 39 states allow bank fiduciaries to invest trust assets in proprietary mutual funds.
Regardless of state law, under its Regulation Y, the Federal Reserve prohibits bank fiduciaries from purchasing, in their sole discretion, proprietary mutual funds if the advisor to the fund is a subsidiary of the bank holding company. The Federal Reserve stated that banks that had been criticized under this provision have responded by restructuring the investment advisor as a subsidiary of the bank rather than of the bank holding company, by obtaining co-trustees who have given consent, or by using outside counsel to petition the Federal Reserve to reexamine its interpretation of the Regulation Y provisions. While it was not possible to quantify the exact extent to which assets in trust accounts had been invested in bank proprietary mutual funds, some inferences could be drawn from the information that was available. First, while trust assets had been invested in proprietary mutual funds, most of the assets in proprietary mutual funds appeared to have come from other sources. Second, the level of funds contained in discretionary trust accounts far exceeded the amount of assets in proprietary mutual funds. Finally, industry and regulatory officials said that most of the trust assets that had been converted into proprietary mutual funds had come from employee benefit accounts. Bank proprietary mutual funds held about $202 billion in assets at the end of 1993, an increase of $187 billion since 1983. Data were not available to indicate what portion of that total came from each bank's trust assets. However, even if all the assets in bank proprietary mutual funds came from the investment of trust assets, they would represent only 15 percent of the $1.5 trillion in discretionary employee benefit and personal trust assets.
Because bank proprietary mutual funds provide a close alternative investment vehicle to a trust department’s pooled trust investment funds, we examined pooled trust investment funds to determine whether the volume of assets or the number of pooled trust investment funds had declined since the introduction of proprietary mutual funds. Such developments could be expected if pooled trust funds were being converted to proprietary mutual funds in substantial amounts. As figure 3 shows, pooled trust investment funds were still a more widely used investment vehicle for trusts as compared with proprietary mutual funds. From 1983 to 1993, the total of pooled trust investment funds, net of any decreases caused by conversions to proprietary mutual funds, grew from $150 billion to $760 billion. Moreover, at the end of 1993, 95 percent of the assets in pooled trust investment funds were in banks that also offered proprietary mutual funds. Most banks that offered pooled trust investment funds in 1986 continued to do so in 1993. While some banks discontinued pooled funds between 1986 and 1993, other banks increased the number of pooled trust investment funds they offered, and some banks offered pooled funds for the first time. Appendix IV discusses the status and trends in banks that offered both pooled trust investment funds and proprietary mutual funds. According to the annual data collected by federal bank regulators, the concentration of pooled trust investment funds in the largest banks has increased over the last several years. In 1993, the 10 largest banks held 75 percent of all pooled trust investment fund assets, compared with 55 percent of such assets in 1986. We do not know the reason for this change, although large bank and large corporate mergers could be contributing factors. We also do not know what effect this increased concentration has had or will have on the prospects for conversions to mutual funds. 
Evidence from press accounts and our discussions with bankers and bank regulators indicated that trust assets had been used to establish proprietary mutual funds. Lipper collected data from SEC registration filings of newly organized bank proprietary mutual funds to estimate the volume of trust assets used to start up proprietary mutual funds. Lipper reviewed these data and calculated that from 1985 through 1992, about $24 billion of trust assets were converted to proprietary mutual funds. (See table 1.) Lipper calculated this amount by assuming that newly registered bank proprietary mutual funds that reported a significant amount of assets at inception acquired those assets through the conversion of trust assets. The $24 billion represented about 15 percent of the $161 billion in total assets invested in proprietary mutual funds at the end of 1992. For 1985 to 1990, Lipper estimated that most of the trust asset conversions were into MMMFs. However, in 1991 and 1992 most were conversions into long-term debt and equity mutual funds. Lipper reported that MMMFs accounted for the majority of assets in bank proprietary mutual funds. Thus, conversions of trust assets represented a larger portion of the assets in proprietary long-term debt and equity mutual funds than in MMMFs. These data indicated only the volume of trust assets (which could be from pooled or nonpooled sources) assumed to have been placed in proprietary mutual funds at the start-up of a fund. Industry experts said that these numbers likely understated the amount of trust assets invested in proprietary mutual funds because they did not include the amount of trust assets that had been invested in or converted to mutual funds after the fund's start-up. At the end of 1993, banks reported that $23 billion in discretionary employee benefit assets and $22 billion in discretionary personal trust assets were invested in MMMFs.
We do not know what portion of these investments was in proprietary MMMFs, but if all of the assets were invested in proprietary mutual funds, they would have accounted for only one-third of the assets in proprietary MMMFs. Data on the amount of trust assets invested in long-term debt and equity mutual funds were not separately reported. Bank regulators stated that pooled trust investment funds are a likely source of trust funds for conversion into proprietary mutual funds. We do not know how much of the assets in proprietary mutual funds has come from pooled trust investment funds. Because capital gains taxes are deferred on funds invested in employee benefit plans, most trust assets that have been converted into mutual funds—including those in the banks we interviewed—have likely come from pooled employee benefit funds. A few banks have converted common trust funds into proprietary mutual funds, but none of the bankers we spoke with had done so. However, these bankers said they believed many more conversions of common trust funds would occur if tax laws were changed to clearly allow a tax-deferred transfer of common trust fund assets to mutual funds. Congress passed clarifying legislation that would have allowed tax-deferred conversions as part of other legislation in 1992, but the President vetoed the bill. Bankers told us that many trust customers prefer to have their accounts invested in a mutual fund rather than a pooled trust investment fund. These bankers said that mutual funds, unlike pooled funds, have become widely accepted by the general public as an investment vehicle. Also, they said that the services routinely offered by mutual funds are in some respects superior to pooled trust investment funds.
For example, mutual funds are priced at market daily, rather than monthly or quarterly, and the funds' performance can be followed in the daily press; money can be invested or withdrawn daily, rather than monthly; and distributions can, if desired by the customer, be paid out in shares of the mutual fund, rather than only in cash. These bankers said that mutual funds may also provide a wider range of investment choices than pooled trust investment funds. For example, a proprietary mutual fund might attract enough trust and nontrust investors to justify specialized funds in such areas as small companies or foreign investments. However, the trust customers alone may not provide a large enough base to support the overhead and transaction costs of running these funds. Federal laws and some state laws have established various requirements relating to the disclosure of and consent for investment of trust assets in proprietary mutual funds. Under conditions set out in PTE 77-4, Labor requires disclosure of the investment of employee benefit assets in proprietary mutual funds to a second, independent fiduciary as well as the consent to such an investment by the independent fiduciary. PTE 77-4 does not require that Labor or the bank regulators be notified or provide advance approval. Instead, PTE 77-4 establishes bank procedures regarding (1) what must be disclosed and to whom, (2) fees, and (3) conflicts of interest. Provision of the mutual fund prospectus, disclosure of fees paid by the employee benefit plan, and an explanation of why the investment is appropriate must be made to a second fiduciary, who is to be chosen by the plan and independent of the bank. The second fiduciary must give consent before the conversion. Labor specifies that consent may be limited to fees to be charged. Of those banks we visited that had converted employee benefit plan assets into proprietary mutual funds, all had relied on PTE 77-4.
These banks sent disclosure letters and consent forms to the independent fiduciaries of the plans. A sample of the letters revealed that these banks discussed the benefits of investing in the proprietary mutual funds and disclosed their relationship to the funds and the fees that would be charged. Several letters also discussed how funds in the accounts would be handled if the fiduciaries did not agree to the conversion. In only one case did a bank fail to obtain positive consent before the conversion took place. In an examination report, federal regulators cited the bank for failure to obtain positive consent in a timely manner. For investing the assets of personal trust accounts in a proprietary mutual fund, the issue of disclosure concerns whether beneficiaries are notified about the fees charged for such investments. According to the ABA survey, disclosure requirements vary, with about half of the 39 respondent states saying they required fee disclosure to trust customers. In implementing the conversion of a common trust fund into a proprietary mutual fund, some of the bankers we interviewed said they would send disclosure notices to current income beneficiaries even if the law or trust agreement did not require it. We do not know whether these policies are representative of the industry. For personal trusts, the ABA survey reported that beneficiary consent is not required in almost all of the 39 states that responded. Federal bank regulators said that requiring consent could be a problem in some personal trusts where some future beneficiaries may not even be born. New trust agreements are often written to allow investment of trust assets in proprietary mutual funds. Trust departments normally charge a fee for managing investments in a trust account, including a fee for providing investment advice. 
Similarly, the investment advisor to a mutual fund, including bank proprietary funds, normally charges a fee for investment advice, which is paid out of the assets of the fund. Thus, by investing trust assets in proprietary mutual funds, a bank creates a possibility of collecting two fees for investment advice (i.e., collecting both trust fees and mutual fund fees on the same trust assets). This practice is generally referred to as charging double fees. Besides paying the fee for investment advice, mutual funds may also pay fees for distribution, custody, and other services. For a trust customer, total fees could be greater for trust assets invested in proprietary mutual funds than if the trust assets were otherwise invested. However, banks’ practices in charging fees when investing trust assets in proprietary mutual funds are governed by federal and some state laws, and other factors also influence those practices. The laws governing fees charged for investment advice differ for employee benefit plans and personal trust accounts. For employee benefit plans, PTE 77-4 prohibits paying double fees for investment advice. For personal trust accounts, the states that allow investment of trust assets in proprietary mutual funds differ regarding permissible fees. Of the states that responded to the ABA survey, 8 prohibited charging both a trust and a mutual fund investment advisory fee, 27 permitted both fees to be charged, and 4 were silent about whether both fees may be charged. We did not have data on the remaining 11 states. Data were not available to determine whether banks were more likely to invest trust assets in proprietary mutual funds when double fees were permitted by state law. 
Moreover, even if data had been available, we do not know if we could have isolated other factors—such as the performance of the fund and its suitability for the investment goals of the beneficiaries of the trust accounts—that influenced the decision of whether to invest trust assets in proprietary mutual funds. Factors other than federal and state laws also influence banks’ fees when investing trust assets in proprietary mutual funds. According to bankers we interviewed, competitive forces often keep bank fees lower than the amount that could be charged under law for trust assets invested in proprietary mutual funds. They said this is especially true with employee benefit accounts and revocable personal trusts, since customers may move accounts elsewhere if fees are not competitive. Regulatory examinations of bank trust departments may also influence banks’ practices in charging fees because investments in proprietary mutual funds and the fees associated with them are susceptible to regulatory criticism and action based on common law. Bank regulatory officials told us that fees collected by banks for investing trust assets in proprietary mutual funds are a matter of concern. OCC stated that even if the investment of trust assets in a proprietary mutual fund is authorized by state law, the terms of the trust instrument, or the consent of all beneficiaries, regulators still require banks to meet common law standards of prudence and loyalty and the regulation 12 C.F.R. part 9 requirement that fees be reasonable. Thus, investments in proprietary mutual funds and the fees associated with them become susceptible to regulatory criticism and action. Beneficiary lawsuits regarding poor investment performance or fee abuse are another factor that may influence banks’ practices regarding fee charges. Bankers we interviewed said that their banks do not charge, nor would they charge, two investment advisory fees on trust assets invested in proprietary mutual funds. 
We were not able to determine whether or to what extent total fees would be higher on such investments than if they were otherwise invested. Most of the bankers we interviewed said they preferred that their proprietary mutual funds charge the same fees to trust and other investors. These banks arranged for their trust departments to refund mutual fund advisory and sometimes other mutual fund fees to trust accounts. One bank, however, created a separate class of mutual fund shares so that it could waive its mutual fund fees directly. Among the banks we visited that adjusted fees through their trust departments, some bankers said they itemized their trust fees into various categories of service while others charged a single comprehensive trust fee, usually based on average asset values in the account. For those banks that have broken down their trust fees, most of the bankers said they waived either the trust or the mutual fund investment advisory fee but continued to charge a trust fee for nonadvisory services. Of the banks we visited that charged a single trust fee, one banker said the bank planned to waive its entire trust fee on those assets invested in mutual funds. Other bankers said that their banks rebated portions of the mutual fund fees, which might have included other fees besides the investment advisory fee that their bank or affiliates received from the mutual fund. As we pointed out earlier in this report, regulations and laws governing trusts generally prohibit banks acting as fiduciaries from serving in their own interest when that interest conflicts with the interest of a trust. A conflict of interest arises for a banking organization when it invests trust assets in its proprietary mutual fund. As we have noted, the bank regulators address this conflict through regulation. For example, section 9.12 of OCC’s regulation 12 C.F.R. part 9 establishes local law as the applicable standard of permissibility for investments involving a conflict of interest. 
In addition, the regulators' examination of a bank's trust activities is intended, among other things, to identify and resolve this type of conflict on a case-by-case basis. A description of federal oversight authority and trust examination programs is provided in appendix V. The federal bank regulators maintain examination manuals to test for compliance with trust laws and regulations. These manuals provide guidance to test for conflicts of interest, including conflicts involving the investment of trust assets in proprietary mutual funds. However, our review of these manuals indicated that the criteria for evaluating investments in proprietary mutual funds are rather general in nature. For example, the Federal Reserve manual, in its section on conflicts of interest, does not specifically address this issue. Also, OCC has drafted, but not yet adopted, a special set of examination procedures to cover trust investments in proprietary mutual funds. We reviewed trust examination reports on selected banks, some of which had converted trust assets into proprietary mutual funds. The reports contained limited information regarding recent conversions. Given the limited number of examinations we reviewed and the general nature of the trust examination manuals, we have no basis for judging the effectiveness of trust examinations in detecting and controlling unresolved conflicts of interest when investing trust assets in proprietary mutual funds. In addition, most proprietary mutual funds are new, particularly the long-term debt and equity funds. Because it takes several years for a fund to develop a meaningful performance record, it is probably too soon to evaluate the choice of these funds as an investment vehicle for trust assets. In our limited review, we noted that regulators have recognized that the investment of trust assets in proprietary mutual funds poses important regulatory issues for trusts.
Examiners are to direct attention to such issues as double fees in trust examinations. These issues could become more significant if such investments continue. To gain some appreciation of how the general guidelines are applied in examinations, we reviewed examination reports for 13 trust departments, about half of which had converted trust assets to proprietary mutual funds. In one case, OCC found that the bank had failed to get appropriate authorizations from independent fiduciaries for the conversions; OCC indicated that corrective action had been taken. In another case, FDIC questioned the fees that trust customers were being charged on investments in proprietary mutual funds. FDIC indicated that a conflict of interest existed, which management should address in writing. For each of the banks that had trust conversions, however, there was little documentation of detailed examiner review of the transactions. Banks are not required to provide regulators with advance notification of trust conversions to proprietary mutual funds. Because trust examinations can be 2 or more years apart, a federal regulator may not be aware of a conversion until long after the fact; the conversion of pooled trust assets typically has been reviewed after the fact as part of the regulator's periodic trust examination. Regulators are to refer matters of noncompliance relating to employee benefit accounts to Labor. For the years 1991 to 1993, bank regulators stated that they had noted few problems with the investment of trust assets in proprietary mutual funds.
Only one violation relating to a conversion of employee benefit funds had been referred to Labor, where action is pending. We requested comments on a draft of this report from the Federal Reserve, OCC, FDIC, and Labor. OCC, FDIC, and Labor provided written comments, which appear in appendixes VI through VIII, respectively. The Federal Reserve declined to provide written comments. Each of the agencies also provided us with technical comments, which have been incorporated where appropriate. OCC and FDIC indicated that they generally concur with the report’s observations. FDIC noted, however, that the benefits of investing in a mutual fund as opposed to a pooled trust investment fund (as appears on p. 22) should be clarified. It noted that these benefits could be provided by pooled trust investment funds as well as mutual funds. Labor had no specific comments but provided us with an update, which we incorporated, on the referral mentioned on p. 34 of the report. Labor also said that a referral that was being considered by the Federal Reserve has since been determined not to be a violation. We are sending copies of this report to the Comptroller of the Currency, the Chairman of the Board of Governors of the Federal Reserve System, the Chairman of the FDIC, the Secretary of Labor, and other interested parties. We will also make copies available to others upon request. This report was prepared under the direction of Thomas J. McCool, Associate Director, and Stephen C. Swaim, Assistant Director, Financial Institutions and Markets Issues. Other major contributors are listed in appendix IX. If you have any questions, please call me on (202) 512-8678. To determine the extent to which trust assets had been invested in proprietary mutual funds, we reviewed editions of Trust Assets of Financial Institutions. 
This report is issued annually by the Federal Financial Institutions Examination Council, and it contains the most detailed data available regarding the investment of trust assets by banking institutions. We also reviewed statistical data on bank proprietary mutual funds provided to us by Lipper Analytical Services. Although the data on trust assets and the mutual funds data did not document the extent to which banks have invested trust assets in proprietary mutual funds, we were able to gain some insight into this question by comparing these data sources. However, there are some problems with these data, which we have noted below. Because of the format used to collect data in Trust Assets of Financial Institutions, we could not precisely determine the extent to which trust assets were invested in mutual funds, either proprietary or nonproprietary. In addition, we encountered other problems with the data. For example, trust assets may have been double counted whenever the trust department managed a proprietary mutual fund. Also, regulators and bankers told us that the instructions for completing the report, which are complex, may result in inconsistent reporting by the banks. Finally, the 1993 data provided to us were preliminary. The Lipper data provided information regarding the growth of proprietary mutual funds, but the source of these assets (i.e., whether from pooled trust investment funds, other trusts, or nontrust sources) was not tracked. However, Lipper provided an estimate of the amount of all types of trust assets that were invested at the start-up of a proprietary mutual fund, which Lipper identified from SEC registration filings. We determined that, for purposes of this report, only those trust assets for which banks provided some degree of investment advice were relevant. 
Furthermore, from our discussions with bankers and federal bank regulators, we determined that the trust assets most likely to be converted into proprietary mutual funds were those invested in pooled trust investment funds (since these funds are very similar to mutual funds and would provide a relatively large asset base). Therefore, we narrowed our focus further to those assets in pooled trust investment funds. To determine disclosure and consent requirements and whether double fees are permitted, we reviewed the federal laws and regulations that are generally relevant to the investment of trust assets and in particular to the investment of such assets in proprietary mutual funds. We interviewed officials of the federal bank regulators—the Office of the Comptroller of the Currency (OCC), the Federal Reserve Board, and the Federal Deposit Insurance Corporation (FDIC)—who oversee bank trust departments. We also interviewed officials at the Department of Labor who regulate the operation of employee benefit plans in accordance with the Employee Retirement Income Security Act of 1974 (ERISA) and related Labor regulations. Such plans are significant sources of trust assets. Labor oversight includes the investment of employee benefit plan assets in mutual funds. We interviewed Securities and Exchange Commission (SEC) officials to determine the nature of their oversight of mutual funds in connection with trust assets. OCC, Federal Reserve, and FDIC officials provided the manuals used in trust department examinations and described how their examination programs test for compliance with the applicable laws and regulations. They also provided their views regarding potential fiduciary conflicts of interest, fees, disclosure, and customer consent when trust assets are invested in proprietary mutual funds.
We reviewed the trust examination reports for 13 banks that were identified as operating proprietary mutual funds; about half of them had converted trust assets into proprietary mutual funds. We obtained additional information from other sources. The American Bankers Association (ABA) provided us a survey regarding the status, as of November 1993, of the laws they identified as pertaining to the investment of trust assets in proprietary mutual funds in 39 states. We interviewed officials of 10 banks where trust departments managed pooled trust investment funds and had either converted such investments into proprietary mutual funds or had contemplated doing so. We selected these banks on a judgment basis to reflect a variety of trust activity and because each was familiar with the issues involved in the conversion of trust assets into proprietary mutual funds. The trust assets in these banks ranged from several billion to hundreds of billions of dollars. In 1993, three of these banks were among the 20 largest managers of pooled trust investment funds. One bank had proprietary mutual funds that were among the 10 largest such proprietary funds offered by banks. We were interested in each bank’s policies and experience, and also in their officers’ views about the prospects for the investment of trust assets in proprietary mutual funds. Specifically, we asked them why mutual funds were becoming a more commonly used investment choice and how they had dealt with issues of disclosure, consent, and fees charged in connection with trust asset investment in proprietary mutual funds. Data that would fully describe these activities were not available, and we did not independently verify the information we obtained. Because we interviewed only a limited number of banks, we do not know if the practices described in this report reflect those of the industry as a whole, and our results are not statistically valid. 
Our work was done in Washington, D.C., between December 1993 and April 1994 in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from OCC, FDIC, and Labor. These comments are discussed on pp. 20 and 21 and reproduced in appendixes VI through VIII. At the end of 1993, available data show that banks held $10.6 trillion of assets in trust. Of this total, $2 trillion were classified as discretionary trust assets. For these assets, banks are to provide investment management services. Such services can range from the bank simply giving advice to an outside party that has sole authority to make investment decisions to the bank itself having sole authority to direct investments. The remaining assets, $8.6 trillion, were classified as nondiscretionary. For these assets, banks simply provide a variety of investment support services, such as custody of securities, dividend collections and distributions, and record keeping. The focus of this report is on the discretionary trust assets because banks are to provide investment management services for them. Discretionary trust assets may be divided into three categories: (1) employee benefit accounts, (2) personal trust accounts, and (3) other accounts. In 1993, almost half of the discretionary trust assets, $914 billion, were held in employee benefit accounts; another quarter, $556 billion, were held in personal trusts; the remaining $577 billion were held in other accounts, which may include the assets of proprietary mutual funds. Figure II.1 summarizes the types of trust assets held by banks as of December 1993. Pooled trust investment funds are similar in concept to mutual funds, but unlike mutual funds, which may be offered to the general public, pooled trust investment funds are not marketed to the general public. 
Of the $2 trillion in discretionary trust assets in 1993, $760 billion were invested in pooled trust investment funds composed of pooled employee benefit accounts and pooled personal trust accounts. Separate pools are maintained for these accounts because employee benefit plans have deferred tax status whereas personal trusts generally do not. Pooled personal trust accounts are generally referred to as common trust funds. (See fig. II.1.) Discretionary trust assets are separated into three groups, and for two of these groups—employee benefit plans and personal trusts—the assets are further divided into pooled trust investment funds and individually managed employee benefit plans and personal trust accounts. Industry officials and regulators we spoke with said that pooled trust investment funds are a likely source of trust assets for conversion into proprietary mutual funds because they are very similar to mutual funds. Accounts that are not invested in pooled trust funds are referred to as individually managed trust accounts. Some of the bankers we interviewed said that individually managed accounts are less likely to be converted into mutual fund investments because of customer preference. Of the $914 billion in employee benefit accounts in 1993, $608 billion, or 67 percent, were invested in pooled funds. By contrast, only $152 billion of the $556 billion in personal trust accounts, or 27 percent, were invested in pooled funds (i.e., common trusts). The concentration of pooled employee benefit funds in a few large banks was also greater than that of common trust funds. The five banks with the largest amount of pooled employee benefit funds controlled $460 billion, or 76 percent, of the total assets in these pools. The five banks with the largest amount of common trust funds held $46 billion, or only 30 percent of the total assets in these pools. 
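The pooled-fund shares and concentration figures above follow from simple arithmetic on the reported year-end 1993 totals. A minimal back-of-envelope check (the `pct` helper and variable names are ours, not the report's; dollar figures in billions, taken from the text):

```python
def pct(part, whole):
    """Percentage share, rounded to the nearest whole percent."""
    return round(100 * part / whole)

# Reported 1993 totals, in billions of dollars (from the text).
employee_benefit_total = 914   # discretionary employee benefit accounts
employee_benefit_pooled = 608  # portion invested in pooled funds
personal_trust_total = 556     # discretionary personal trust accounts
common_trust_pooled = 152      # portion invested in common trust funds

# Share of each account type invested in pooled funds.
print(pct(employee_benefit_pooled, employee_benefit_total))  # 67 percent, as reported
print(pct(common_trust_pooled, personal_trust_total))        # 27 percent, as reported

# Concentration: the five largest banks' share of each pool.
print(pct(460, employee_benefit_pooled))  # 76 percent of pooled employee benefit assets
print(pct(46, common_trust_pooled))       # 30 percent of common trust fund assets
```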
The significance of this concentration is that the scale of future conversions of trust assets into proprietary mutual funds may depend to a great extent on decisions made by a small number of banks. Pooled trust investment funds provide a large potential source of assets for future conversion to proprietary mutual funds. The number of banks offering one or both types of pooled trust investment funds, however, declined from 895 in 1983 to 533 at the end of 1992. It is possible that bank mergers account for a significant part of this decline. In other cases, banks could have either converted their pooled trust investment funds to proprietary mutual funds or liquidated their pooled funds and invested the proceeds elsewhere. Table III.1 shows the status of pooled trust investment funds and proprietary mutual funds for 1983, 1988, and 1993. On the basis of data presented in this table, we noted the following changes:

- The assets in employee benefit and common trust funds grew by about 400 percent from 1983 to 1993, to $760 billion, with employee benefit funds growing faster than common trusts.

- The number of separate pooled trust investment funds declined from more than 4,000 in 1983 to about 3,500 in 1993.

- Assets in the pooled trust investment funds in 1993 were nearly four times as large as assets in proprietary mutual funds, notwithstanding the fact that the banks’ proprietary mutual funds grew from relative insignificance to more than $200 billion in just 10 years. Pooled trust investment funds increased by $610 billion while proprietary mutual funds increased by $187 billion. We do not know how much pooled trust investment funds would have grown if proprietary mutual funds had not been available.

- At the end of 1993, about five times as many banks maintained pooled trust investment funds as offered proprietary mutual funds. Data on the number of institutions offering proprietary mutual funds in 1983 were not available.
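The growth figures can be cross-checked the same way; the $150 billion 1983 base below is implied by the text's numbers rather than reported directly:

```python
# Cross-check of the pooled-fund growth observations (billions of dollars).

pooled_1993 = 760        # pooled trust investment fund assets in 1993
pooled_increase = 610    # increase from 1983 to 1993
pooled_1983 = pooled_1993 - pooled_increase  # implied 1983 base of $150B

# "About 400 percent" growth: the exact figure is roughly 407 percent.
growth_pct = 100 * pooled_increase / pooled_1983

# Pooled funds versus the "more than $200 billion" in proprietary mutual
# funds: 760 / 200 = 3.8, i.e., "nearly four times as large."
ratio = pooled_1993 / 200

print(pooled_1983, round(growth_pct), round(ratio, 1))  # → 150 407 3.8
```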
Table III.1 indicates that pooled trust investment funds have remained an important investment vehicle, despite the smaller number of banks offering them and the start-up of many new proprietary mutual funds. Although customer preference and the relative performance of pooled trust investment funds compared with mutual funds will be important factors in determining whether there are additional conversions of pooled trust investment funds to proprietary mutual funds, several other factors may have an impact on this process. These factors include (1) the profitability of continuing to offer both pooled trust investment funds and proprietary mutual funds, especially since proprietary mutual funds can be offered to a wider range of customers; (2) changes in federal tax laws to allow capital gains accrued in common trusts to be deferred in a conversion; and (3) changes in federal regulation, such as that which has been proposed by OCC for a number of years to allow banks to advertise the performance of their pooled trust investment funds. Out of the more than 3,000 banks active in the trust business, about 500 offered pooled trust investment funds at year-end 1992. At the end of 1993, 91 of the banks offering pooled trust investment funds offered proprietary mutual funds as well. As table IV.1 shows, these 91 banks, which we will call dual providers, dominated both markets. They held 95 percent, $719 billion, of all reported pooled trust investment fund assets and 94 percent, $191 billion, of all reported proprietary mutual fund assets. Table IV.1 also shows that the number of separate pooled trust investment funds offered by dual providers had declined since 1988 while asset growth had been substantial. In about half of these banks, the number of separate pooled trust investment funds decreased, although fewer than one-fourth of the 91 banks reported a decline in the assets of pooled funds over that period. 
Such declines could reflect different events, such as conversions of trust assets into proprietary or other mutual funds, bank mergers, or trouble at the bank. The decline in the number of pooled trust investment funds compared to the increase in the number of proprietary mutual funds offered by the dual providers is an interesting development. However, the extent of the shift toward proprietary mutual funds needs to be kept in perspective. Most pooled trust investment funds have not been converted, and in fact their assets are continuing to grow. One indication of the continuing importance of pooled funds is that only 17 of the banks that offered proprietary mutual funds in 1993 had actually discontinued either their common trust funds or their pooled employee benefit funds since 1986, and only 5 had discontinued both. While a huge potential for future conversions may exist, particularly for employee benefit funds, we have no basis for predicting the extent of future conversions. In reviewing banks that had pooled trust investment funds, we noted a high and increasing concentration of pooled trust assets. At the end of 1993, the 10 largest banks, measured by their pooled trust investment funds, held 75 percent of all pooled trust investment fund assets. In 1986, the 10 largest banks had only 55 percent of this market. Since 1986, assets in pooled trust investment funds in these banks have grown almost eight times faster than pooled trust assets in smaller banks. The increase in concentration of these funds among the 10 largest banks indicated that despite the large volume of pooled trust assets managed by banks, many banks are unlikely to reach the scale of assets needed to make a mutual fund viable simply by converting trust assets into a proprietary mutual fund. The average size of pooled trust investment funds has also grown. 
This increase in size may make it easier for pooled funds to realize any available economies of scale and may lessen the possibility that pooled trust investment funds would be converted into proprietary mutual funds.

The laws and regulations relating to trusts are enforced by OCC, the Federal Reserve, and FDIC. Their oversight authority stems in part from the fact that they are authorized to grant or terminate the trust powers of banks and bank holding companies and their bank and trust subsidiaries. The regulators say that they have the tools necessary to ensure that banks are meeting their trust-related fiduciary responsibilities. Each agency maintains a trust examination manual for the guidance of examiners. We reviewed these manuals and found that they reflected the regulations issued by each agency. Bank examiners must determine if the bank has resolved any conflicts of interest in favor of the trust account and its beneficiaries. They are also to determine if the bank can justify all trust investments by documented analysis of their historic performance and suitability for the trust involved. Regulatory guidelines and policies require examiners making these determinations to exercise a great deal of judgment in analyzing investment decisions and other aspects of trust activities.

Each of the federal regulatory agencies has a relatively small number of bank examiners specializing in trust activities, although each of these agencies expects all bank examiners to review trust activities, if any, in the banks they examine. The Federal Reserve had 65 trust specialists out of a field force of approximately 1,600 examiners; OCC had 79 out of about 3,220; and FDIC had 21 out of 3,259. OCC stated that it usually examines trust activities as part of its compliance program although the examinations are usually conducted separately from other compliance examinations. FDIC and the Federal Reserve conduct separate trust examinations.
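To put the specialist headcounts in perspective, trust specialists made up only a small fraction of each agency's examiner field force; a simple tabulation (ours, using the figures above):

```python
# Trust specialists as a share of each agency's examiner field force,
# using the headcounts cited in the text.

field_forces = {
    "Federal Reserve": (65, 1600),
    "OCC": (79, 3220),
    "FDIC": (21, 3259),
}

shares = {
    agency: round(100 * specialists / examiners, 1)
    for agency, (specialists, examiners) in field_forces.items()
}

print(shares)  # → {'Federal Reserve': 4.1, 'OCC': 2.5, 'FDIC': 0.6}
```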
The frequency of trust examinations depends variously on the size of a bank’s trust department and its earlier rating, as shown in table V.1. OCC, the Federal Reserve, and FDIC use the Uniform Interagency Trust Rating System in their trust examinations. The system was designed to measure performance and identify problems that warrant correction. Banks are rated on six separate components: (1) supervision and organization; (2) operations, controls, and audits; (3) asset administration; (4) account administration; (5) conflicts of interest and self-dealing; and (6) earnings, volume, trends, and future prospects.

Common trust fund: A type of pooled trust investment fund maintained by a bank for the collective investment of money held as trustee, executor, administrator, guardian, or custodian under a state Uniform Gifts to Minors Act.

Duty of loyalty: This principle requires a fiduciary to act solely in the interests of his clients, excluding all self-interest.

Employee pension benefit plan: ERISA defines employee pension benefit plans as those which provide retirement income to employees or result in deferred employee income for a period extending to the termination of covered employment or beyond.

Employee Retirement Income Security Act (ERISA): A federal statute administered by the Department of Labor, Internal Revenue Service, and Pension Benefit Guaranty Corporation. ERISA regulates the conduct of those charged with administering and investing the assets of privately sponsored employee benefit plans.

Fiduciary: A person acting alone or jointly with others primarily for the benefit of another. The principal function of a fiduciary is the management of property for others.

Investment advisor: An organization that advises others as to the value of or advisability of investing in securities. A mutual fund employs an investment advisor to give professional advice on its investments and the management of its assets.

Mutual fund: A company that issues redeemable securities and is engaged primarily in the business of investing or trading in securities.
Mutual funds enable investors to pool their money to obtain professional management and diversification of their investments. A mutual fund must stand ready to buy back its shares at their current net asset value. The value of the shares depends on the market value of the fund’s portfolio of securities at a given time.

In an agency relationship, a customer retains legal ownership of the managed property and receives the beneficial interest in the property. Agency relationships terminate upon the death of the customer. In a trust relationship, ownership of and beneficial interest in the trust property are separated. The trustee takes title to the trust property to manage it for the benefit of others.

Pooled trust investment fund: Similar in concept to a mutual fund, except these funds are only available to trust customers as allowed by law or regulation. The term is used in this report to refer to both common trusts and the pooled funds of employee benefit plans.

Proprietary mutual fund: Funds advised by the bank, its subsidiary, or an affiliate.

Prudent man rule: This principle requires a fiduciary to invest assets in a manner similar to that which would be selected by a prudent person of discretion and intelligence who is seeking a reasonable income and preservation of assets. It is a rule of conduct, not of performance.

Trustee: An individual or institution holding title to and managing trust property on behalf of others.
Pursuant to a congressional request, GAO provided information on the extent to which banks have invested trust assets into proprietary mutual funds, focusing on: (1) the disclosure and consent requirements that apply when trust assets are invested into these funds; (2) whether double fees on invested trust assets are legal; and (3) the regulatory controls that prevent banks from acting in their own self-interest.

GAO found that: (1) most banks have not invested trust assets in proprietary mutual funds; (2) the majority of funds invested in bank proprietary mutual funds are from non-trust assets; (3) by the end of 1992, about $24 billion in trust assets had been used to start up proprietary mutual funds which represented about 15 percent of the total assets in these funds; (4) the bank industry believed the trust asset investments were understated because the estimates did not take into account conversions and new trust investments; (5) in 1993, about $45 billion in employee benefit and personal trust assets were invested in short-term money market mutual funds; (6) industry and regulatory officials believe that trust investments are becoming more attractive to investors for tax reasons; (7) investment disclosure requirements vary by state and most states that allow proprietary mutual fund investments do not require beneficiary consent; (8) although 8 states are in compliance with the double fee prohibitions on employee accounts, 27 states allow double fees to be charged; (9) most banks lack incentives to charge double fees because of competition and the possibility of federal penalties and beneficiary lawsuits; (10) when investing trust assets, banks are prohibited from acting in their own
self-interest, must justify their investments, and are subject to federal review; and (11) the effectiveness of trust examinations could not be determined, since the number of examinations is limited and the use of proprietary mutual funds in trusts is new.
The concept of using a missile to destroy another missile (hit-to-kill) has been explored since the mid-1950’s, but it was not until 1984 that the first such intercept achieved its objective. Between the mid-1980’s and late-1990’s, the United States conducted a number of experiments designed to demonstrate that it was possible to hit one missile with another. In 1997, the Ballistic Missile Defense Organization (BMDO) established the National Missile Defense (NMD) Joint Program Office. The program office was directed to demonstrate by 1999 a system that could protect the United States from attacks by intercontinental ballistic missiles and to be in a position to deploy the system by 2003 if the threat warranted. The initial system consisted of space- and ground-based sensors, early warning radars, interceptors, and battle management functions.

The program underwent additional changes as the new decade began. In September 2000, the President decided to defer deployment of the NMD system, but development of the system continued with the goal of being ready to deploy the system when directed. This action was followed in 2001 by BMDO’s redirection of the prime contractor’s efforts from developing and deploying an NMD system to developing an integrated test bed with the newly designated GMD system as its centerpiece. The Secretary of Defense, in January 2002, renamed BMDO as MDA and consolidated all ballistic missile defense programs under the new agency. Former missile defense acquisition programs became elements of a single ballistic missile defense system. These changes were followed in December 2002 by the President’s directive to begin fielding in 2004 a ballistic missile defense system, which included components of the GMD element already under development.

The GMD element is intended to protect the United States against long-range ballistic missiles in the midcourse phase of their flight.
This is the point outside the atmosphere where the motors that boost an enemy missile into space have stopped burning and the deployed warhead follows a predictable path toward its target. Compared to the boost and terminal phases, this stage of flight offers the largest window of opportunity for interception and allows the GMD element a longer time to track and engage a target. As illustrated in figure 1, GMD will rely on a broad array of components to track and intercept missiles. Figure 2 provides a notional concept of how these components will operate once they are fully integrated into the GMD element.

MDA is gaining the knowledge it needs to have confidence that technologies critical to the GMD Block 2004 capability will work as intended. Two of the ten technologies essential to the Block 2004 capability have already been incorporated into actual prototype hardware and have been demonstrated to function as expected in an operational environment. Other technologies are reaching this level of maturity. If development and testing proceed as planned, MDA will demonstrate the maturity of five additional technologies by the second quarter of fiscal year 2004 and two critical radar technologies during fiscal year 2005. MDA believes that its best opportunity to demonstrate the maturity of the tenth technology, which is critical to GMD’s primary radar, may come through the anticipated flight tests of foreign missiles.

Our work over the years has found that making a decision to begin system integration of a capability before the maturity of all critical technologies has been demonstrated increases the program’s cost, schedule, and performance risks. Because the President directed DOD to begin fielding a ballistic missile defense system in 2004, MDA began GMD system integration with technologies whose maturity has not been demonstrated. As a result, there is a greater likelihood that critical technologies will not work as intended in planned flight tests.
If this occurs, MDA may have to spend additional funds in an attempt to identify and correct problems by September 2004 or accept a less capable system.

Successful developers follow “knowledge-based acquisition” practices to get quality products to the customer as quickly and cost-effectively as possible. As a part of meeting this goal, developers focus their technology programs on maturing technologies that have the realistic potential for being incorporated into the product under consideration. Accordingly, successful developers spend time to mature technology in a technology setting, where costs are typically not as great, and they do not move forward with product development—the initiation of a program to fully design, integrate, and demonstrate a product for production—until essential technologies are sufficiently mature.

An analytical tool called technology readiness levels (TRLs), which has been used by DOD and the National Aeronautics and Space Administration, can assess the maturity level of a technology as well as the risk that the technology poses if it is included in a product’s development. The nine readiness levels are associated with progressing levels of technological maturity and demonstrated performance relative to a particular application—starting with paper studies of applied scientific principles (TRL 1) and ending with a technology that has been “flight proven” on an actual system through successful mission operations (TRL 9). Additional details on TRLs are shown in appendix III. TRLs provide a gauge of how much knowledge the program office has on the progress or status of a particular technology and are based on two principal factors: (1) the fidelity of demonstration hardware, including design maturity and level of functionality achieved; and (2) the extent and realism of the environment in which the technology has been demonstrated.
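The nine-level scale described above can be sketched as a simple lookup; the one-line summaries are paraphrases for illustration rather than official DOD definitions, and the `ready_for_integration` helper is our own shorthand for the TRL 7 threshold that successful developers insist upon:

```python
# Illustrative sketch of the nine technology readiness levels (TRLs);
# summaries are paraphrased, not official DOD definitions.

TRL_SCALE = {
    1: "Basic principles observed and reported (paper studies)",
    2: "Technology concept and/or application formulated",
    3: "Analytical and experimental proof of concept",
    4: "Component or breadboard validated in a laboratory environment",
    5: "High-fidelity breadboard tested in a laboratory or relevant environment",
    6: "Representative prototype demonstrated in a relevant environment",
    7: "Prototype demonstrated in an operational environment",
    8: "Actual system completed and qualified through test and demonstration",
    9: "Actual system flight proven through successful mission operations",
}

def ready_for_integration(trl, threshold=7):
    """System integration risk drops once a technology reaches TRL 7."""
    return trl >= threshold

print(ready_for_integration(6), ready_for_integration(7))  # → False True
```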
MDA recognizes the value of beginning system integration with mature technology and of using TRLs to assess the maturity of technology proposed for a block configuration. In particular, MDA prefers to include new technology in a block configuration only if the technology has reached a TRL 7; that is, only if prototype hardware with the desired form, fit, and function has been proved in an operational environment. However, MDA retains the flexibility to include less mature technology in a block configuration if that technology offers a significant benefit in performance and the risk of retaining it is acceptable and properly managed.

Through technical discussions with the GMD joint program office and its prime contractor, we identified ten critical GMD technologies and jointly assessed the readiness level of each. The critical technologies are resident in the exoatmospheric kill vehicle, the boosters, the battle management, command, and control component, and in the element’s radars. In 7 of 10 cases, we agreed with the program office and the GMD prime contractor on the maturity level of the element’s critical technologies. The differences in the remaining three cases, as discussed in detail below, were primarily due to interpretation of TRL definitions. The program office and its contractor rated the two booster technologies and one radar technology at higher readiness levels than, in our opinion, MDA had demonstrated.

Most critical GMD technologies are currently at TRLs 5 and 6. At TRL 5, the technology’s development is nearing completion, but it has not been applied or fitted for the intended product. At this point, the technology has been incorporated into a high-fidelity breadboard that has been tested in a laboratory or relevant environment. Although this demonstrates the functionality of the technology to some extent, the hardware is not necessarily of the form and fit (configuration) that would be integrated into the final product.
A new application of existing technology is usually assessed at a TRL 5, because the technology has not been demonstrated in the relevant environment for the new application. TRL 6 begins the true “fitting” or application of the technology to the intended product. To reach this level, technology must be a part of a representative prototype that is very close to the form, fit, and function of that needed for the intended product. Reaching a TRL 6 requires a major step in a technology’s demonstrated readiness; that is, the prototype must be tested in a high-fidelity laboratory environment or demonstrated in a restricted but relevant environment.

Two of the ten GMD technologies were assessed at a TRL 7, the level that successful developers insist upon before initiating product development. To reach this level, a pre-production prototype of the technology must demonstrate its expected functionality in an operational environment. If development and testing proceed as planned by MDA, we judge that most of the technologies (7 of 10) will be at a TRL 7 after the completion of integrated flight test (IFT)-14, which is scheduled for the second quarter of fiscal year 2004. Table 1 summarizes our assessment of the TRL for each critical technology as of June 2003 and the date at which MDA anticipates each technology will reach TRL 7. A detailed discussion of each critical technology follows.

The exoatmospheric kill vehicle is the weapon component of the GMD interceptor that attempts to detect and destroy the threat reentry vehicle through a hit-to-kill impact. The prime contractor identified three critical technologies pertaining to the operation of the exoatmospheric kill vehicle. They include the following:

- Infrared seeker, which is the “eyes” of the kill vehicle. The seeker is designed to support kill vehicle functions like tracking and target discrimination.
The primary subcomponents of the seeker are the infrared sensors, a telescope, and the cryostat that cools down the sensors.

- On-board discrimination, which is needed to identify the true warhead from among decoys and associated objects. Discrimination is a critical function of the hit-to-kill mission that requires the successful execution of a sequence of functions, including target detection, target tracking, and the estimation of object features. As such, successful operation of the infrared seeker is a prerequisite for discrimination.

- Guidance, navigation, and control subsystem, which is a combination of hardware and software that enables the kill vehicle to track its position and velocity in space and to physically steer itself into the designated target.

All three kill vehicle technologies have been demonstrated to some extent in actual integrated flight tests on near-production-representative kill vehicles. The infrared seeker has reached a TRL 7, because a configuration very much like that to be fielded has been demonstrated in previous integrated flight tests, and only minor design upgrades are planned to reach the Block 2004 configuration. The remaining two kill vehicle technologies are at a TRL 6, because their functionality is being upgraded and the technologies have yet to be incorporated into the kill vehicle and demonstrated in an operational environment. The on-board discrimination technology has not yet reached TRL 7 because MDA has not tested a “knowledge database” that is expected to increase the kill vehicle’s discrimination capability. The purpose of the database is to enable the kill vehicle to distinguish characteristics of threatening from nonthreatening objects. MDA expects to test the database for the first time in IFT-14.
As a software-intensive technology, on-board discrimination performance under all flight conditions can only be evaluated through ground testing, but flight-testing is needed to validate the software’s operation in a real world environment. The discrimination capability that will be tested in IFT-14 is expected to be fielded as part of the Block 2004 capability. Therefore, IFT-14 should demonstrate the technology’s maturity if the test shows that the kill vehicle achieves its discrimination objective. Similarly, the guidance, navigation, and control technology will also increase to a TRL 7 if the technology achieves its objectives in IFT-14. The inertial measurement unit, an important component of the guidance, navigation, and control subsystem that enables the kill vehicle to track its position and velocity, has not yet been tested in the severe environments (e.g., vibrations and accelerations) induced by the operational booster. This will be first attempted when one of the new operational boosters is used in IFT-14. In addition to testing the inertial measurement unit, IFT-14 will also test the upgraded divert hardware (used to actively steer the kill vehicle to its target) that is expected to be part of the Block 2004 configuration. The integrated booster stack is the part of the GMD interceptor that is composed of rocket motors needed to deliver and deploy the kill vehicle into a desired intercept trajectory. For all flight tests to date, a two-stage surrogate booster called the payload launch vehicle has been used. In July 1998, the GMD prime contractor began developing a new three-stage booster for the GMD program, known as the “Boost Vehicle”, from commercial off-the-shelf components. However, the contractor encountered difficulty. By the time the booster was flight tested in August 2001, it was already about 18 months behind schedule. 
The first booster flight test met its objectives, but the second booster tested drifted off course and had to be destroyed 30 seconds after launch. Subsequently, MDA altered its strategy for acquiring a new booster for the interceptor. Instead of relying on a single contractor, MDA authorized the GMD prime contractor to develop a second source for the booster by awarding a subcontract to another contractor. If development of the boosters proceeds as planned, both boosters will be part of the Block 2004 capability. One booster is known as BV+ and the other as “OSC Lite.” The prime contractor ultimately transferred development of the boost vehicle to a subcontractor who is currently developing a variant—known as “BV+”—for the GMD element. The program office and GMD contractor rated the BV+ at a TRL 7. The prime contractor reasoned that the extent of the legacy program and its one successful flight test should allow for this rating. However, given the limited testing to date, we assessed the BV+ booster currently at a TRL 6; that is, the technology has been demonstrated in a restricted flight environment using hardware close in form, fit, and function to that which will be fielded in 2004. We believe the contractor’s assessment is too high at this time, because the step from TRL 6 to TRL 7 is significant in terms of the fidelity of the demonstration environment. However, the first test of a full configuration BV+ booster will occur with IFT-13A, which is scheduled for the first quarter of fiscal year 2004. In our opinion, the BV+ booster will reach TRL 7 at this time if the booster works as planned. The second booster under development is referred to as “OSC Lite”. This booster, which is essentially the Taurus Lite missile that carries satellites into low-earth orbit, will be reconfigured for the GMD element. 
Despite the fact that the booster was recently tested under restricted flight conditions, GMD’s prime contractor believes that the legacy development of the Taurus Lite missile is sufficient to prove that the OSC Lite has reached TRL 7. However, in our opinion, because the test was conducted with hardware configured as it was in the Taurus missile, not as it will be configured for GMD’s Block 2004, the booster’s maturity level is comparable to that of the BV+. The first flight test of a full configuration OSC Lite booster is scheduled for IFT-13B in the first quarter of fiscal year 2004. We believe that if the booster performs as intended in this test, it will reach TRL 7.

The battle management component is the integrating and controlling component of the GMD element. Prime contractor officials identified and assessed the following sub-components as critical technologies:

- GMD fire control software, which analyzes the threat, plans engagements, and tasks components of the GMD element to execute a mission.

- In-flight interceptor communications system, which enables the GMD fire control component to communicate with the exoatmospheric kill vehicle while in flight.

The two battle management technologies have been demonstrated to some extent in actual integrated flight tests, and both are near their Block 2004 design. We determined that the GMD fire control software has currently achieved a TRL 7 and the in-flight interceptor communications system has reached a TRL 6. Prime contractor officials concur with our assessment. The fire control software is nearing expected functionality and prior software builds have been demonstrated in GMD flight tests. Only minor design changes will be made to address interfacing issues (linking the fire control component with other GMD components) before the software reaches the operational configuration of Block 2004.
As a software-intensive technology, the performance of the fire control software throughout the entire “flight envelope” can only be evaluated through ground testing. Ground testing is well underway at both the Joint National Integration Center at Schriever Air Force Base, Colorado, and at the prime contractor’s integration laboratory in Huntsville, Alabama. The second technology associated with the battle management component is the in-flight interceptor communications system. Even though the pointing accuracy and communications capability of this technology were demonstrated in previous flight tests, the operational hardware to be fielded by 2004 is expected to operate at a different uplink frequency than the legacy hardware used in these past flight tests. Accordingly, we assessed the in-flight interceptor communications system at a TRL 6. The first integrated flight test to include an operational-like build of this technology is IFT-14, and if the technology meets its objectives in this flight test, TRL 7 would be achieved. The GMD contractor initially identified the sea-based X-band radar as the only radar-related critical technology. Since its initial assessment in September 2002, the contractor has now agreed with us that the Beale upgraded early warning radar and the Cobra Dane radar are also critical technologies of the GMD element. The contractor and the GMD program office assessed the Beale and Cobra Dane radars at a TRL 5, because the technology, especially mission software, is still under development and has not yet been demonstrated in a relevant flight environment. The contractor assessed the sea-based X-band radar at a TRL 6. As discussed below, we agree with their assessment of the Beale and Cobra Dane radars but rated the sea-based X-band radar as a TRL 5. The early warning radar at Beale Air Force Base has participated in integrated flight tests in a missile-defense role using legacy hardware and developmental software. 
Design and development of operational builds of the software are progressing, but such builds have been tested only in a simulated environment. Therefore, we assessed the Beale radar technology at a TRL 5—an assessment driven by software considerations. The conversion of the early warning radar at Beale to an upgraded early warning radar, which consists of minor hardware and significant software upgrades, is planned for completion during the middle of fiscal year 2004. After this time, the Beale radar can take part in flight testing in its upgraded configuration. MDA currently plans to demonstrate the upgraded Beale technology in a non-intercept flight test, known as a radar certification flight, in the first quarter of fiscal year 2005. The Beale radar will be demonstrated at a TRL 7 if the objectives of this flight test are achieved.

The Cobra Dane radar is currently being used in a surveillance mode to collect data on selected intercontinental ballistic missile test launches out of Russia; in this role, it does not require real-time data processing and communications capabilities. To achieve a defensive capability by September 2004, the Cobra Dane radar is being upgraded to perform both of these tasks. This upgrade, which requires a number of software modifications, is designed to enable Cobra Dane to detect and track enemy targets much as the Beale upgraded early warning radar does. Although the hardware component of the Cobra Dane radar is mature and will undergo only minor updating, Cobra Dane's mission software is being revised for this application. The revision includes reuse of existing software and development of new software so that the Cobra Dane radar can be integrated into the GMD architecture. Upgrades to the Cobra Dane radar are due to be completed at the beginning of 2004. After the software is developed and ground tested, the radar can reach a TRL 6, but it is uncertain when the radar will reach a TRL 7.
Because of other funding and scheduling priorities, MDA has no plans through fiscal year 2007 for using this radar in integrated flight tests; such tests would require air- or sea-launched targets that are not currently part of the test program. Unless the current test program is modified, the only opportunities for demonstrating Cobra Dane in an operational environment would come from flight tests of foreign missiles. MDA officials anticipate that such opportunities will occur. However, it is not clear that testing Cobra Dane in this manner will provide all of the information that a dedicated test provides because MDA will not control the configuration of the target or the flight environment. The sea-based X-band radar is being built as part of the Block 2004 capability and scheduled for completion in 2005. It will be built from demonstrated technologies—a sea-based platform and the prototype X-band radar currently being used in the GMD test program. Prime contractor officials told us that they consider the risk associated with the construction and checkout of the radar as primarily a programmatic, rather than technical risk, and believe that the sea-based X-band radar has reached a TRL 6. The contractor also stated that the initial operational build of the radar software is developed and currently being tested at the contractor’s integration laboratory. We assessed the sea-based X-band radar as a TRL 5 because the radar has not yet been built and because constructing a radar from an existing design and placing it on a sea-based platform is a new application of existing technology. For example, severe wind and sea conditions may affect the radar’s functionality—conditions that cannot be replicated in a laboratory. As a result, developers cannot be sure that the sea-based X-band radar will work as intended until it is demonstrated in this new environment. 
However, both we and the contractor agree that the maturity level of the sea-based X-band radar will increase to a TRL 7 if it achieves its test objectives in IFT-18 (scheduled for the fourth quarter of fiscal year 2005). From the program's inception in 1997 through 2009, MDA expects to spend about $21.8 billion to develop the GMD element. About $7.8 billion of the estimated cost will be needed between 2002 and 2005 to develop and field the Block 2004 GMD capability and to develop the GMD portion of the test bed. However, MDA has incurred a greater risk of cost increases because, for more than a year, the agency was not sure that it could fully rely upon data from the prime contractor's Earned Value Management (EVM) system, which provides program managers and others with early warning of problems that could cause cost and schedule growth. Before the restructuring of the GMD program in 2002, about $6.2 billion was spent (between 1997 and 2001) to develop a ground-based defense capability. MDA estimates it will need an additional $7.8 billion between 2002 and 2005 to, among other tasks, install interceptors at Fort Greely, Alaska, and at Vandenberg Air Force Base, California; upgrade existing radars and test bed infrastructure; and develop the sea-based X-band radar that will be added in the fourth quarter of fiscal year 2005. In addition, MDA will invest a further $7.8 billion between fiscal years 2004 and 2009 to continue efforts begun under Block 2004, such as enhancing capability and expanding the test bed. Table 2, below, provides details on the funding requirements by block and by fiscal year, and figure 3 provides examples of specific Block 2004 tasks.
MDA did not include the following costs in its Block 2004 estimate:

- The cost to recruit, hire, and train military personnel to operate the initial defensive capability and provide site security at various locations. MDA estimates that an additional $13.4 million (half in fiscal year 2003 and half in fiscal year 2004) will be needed to operate GMD and provide physical security. Additional costs to cover these personnel throughout the life of the program, beginning in 2005 and beyond, were also omitted.
- The cost to maintain equipment and facilities.
- Systems engineering and national team costs, which benefit all elements, including GMD, and cannot be divided among the elements.

Because a significant portion of MDA's Block 2004 GMD cost estimate is the cost of work being performed by the element's prime contractor, MDA's ability to closely monitor its contractor's performance is critical to controlling costs. The tool that MDA, like many DOD entities, has chosen for this purpose is the EVM system. This system uses contractor-reported data to provide program managers and others with timely information on a contractor's ability to perform work within estimated cost and schedule. It does so by examining variances, reported in contractor cost performance reports, between the actual cost and time of performing work tasks and the budgeted or estimated cost and time. While this tool can provide insightful information to managers, MDA's use of it has been hampered by several factors. Principally, although major contract modifications were made in February 2002, it took until July 2003 for MDA to complete a review to confirm the reliability of data from the EVM system. An earlier review of a similar nature revealed significant deficiencies in the contractor's formulation and collection of EVM data.
Until a new review was completed, MDA could not be sure about its ability to rely fully upon this data to identify potential problems in time to prevent significant cost growth and schedule delays. An accurate, valid, and current performance management baseline is needed to perform useful analyses using EVM. The baseline identifies and defines work tasks, designates and assigns organizational responsibility for each task, schedules the work task in accordance with established targets, and allocates budget to the scheduled work. According to DOD guidance, a performance management baseline should be in place as early as possible after the contractor is authorized to proceed. Although the guidance does not define how quickly the contractor should establish a baseline, experts generally agree that it should be in place, on average, within 3 months after a contract is awarded or modified. About a year before the Secretary of Defense directed MDA to adopt an evolutionary acquisition strategy, the agency awarded a new contract for the development of a National Missile Defense system. In February 2002, MDA modified this contract to redirect the contractor’s efforts. Instead of developing a missile defense system that met all of the requirements of the war fighter, as the initial contract required, the modification directed the contractor to develop the first GMD increment, or block, which was to be a ballistic missile test bed with GMD as its centerpiece. Following the contract’s modification, the contractor in June 2002 established an interim baseline. This baseline was developed by adding budgets for near-term new work to the original baseline. Because the cost of the work being added to the baseline had not yet been negotiated, the contractor based the budgets on the cost proposed to MDA, as directed by DOD guidelines. The contractor implemented the baseline almost within the 3-month time frame recommended by experts. 
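The elements of a performance management baseline described above (defined work tasks, assigned organizational responsibility, a schedule, and an allocated budget) can be illustrated with a minimal data-structure sketch. The work packages, owners, and dollar figures below are hypothetical, for illustration only, and are not drawn from the GMD contract:

```python
from dataclasses import dataclass

@dataclass
class WorkPackage:
    """One scheduled, budgeted task in a performance management baseline."""
    task: str          # defined scope of work
    owner: str         # organization responsible for the task
    start_month: int   # scheduled start (months after work authorization)
    end_month: int     # scheduled finish
    budget: float      # budget allocated to the scheduled work ($ millions)

# Hypothetical baseline: every task has an owner, a schedule, and a budget.
baseline = [
    WorkPackage("Booster integration", "Propulsion team", 0, 6, 42.0),
    WorkPackage("Fire control software build", "Software team", 2, 9, 18.5),
    WorkPackage("Radar ground testing", "Sensors team", 4, 12, 27.0),
]

total_budget = sum(wp.budget for wp in baseline)
print(f"Baseline: {len(baseline)} work packages, ${total_budget:.1f}M budgeted")
```

The point of the structure is that earned value analysis is only meaningful when each work package carries all four attributes; a package missing a schedule or a budget cannot be measured against the baseline.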
In the time between the modification and the development of the interim baseline, MDA authorized the contractor to begin work and spend a specified amount of money, and MDA paid the contractor about $390 million during this period. An option that MDA could have used to help validate the interim baseline was to have the Defense Contract Management Agency (DCMA) verify contractor work packages and track the movement of funds between the unpriced work account and the baseline. However, neither MDA nor DCMA initiated these actions. In its technical comments on a draft of this report, DOD pointed out that during the negotiation process, MDA reviews prime and subcontractor proposal data that include engineering labor hours, material, and cost estimates. DOD further noted that these estimates eventually form a basis for the work packages that make up the data for the performance management baseline. We agree that these costs will eventually be associated with the work packages that make up the baseline. However, a joint contractor and MDA review of the initial GMD baseline concluded that even though these costs were otherwise fair and reasonable, some work packages that the contractor developed for the original contract’s baseline did not correctly reflect the work directed by MDA. An independent review of work packages included in the interim baseline would have increased the likelihood that the work packages were being properly developed and that their budget and schedule were appropriate. The contractor completed all revisions to the baseline for the prime contractor and all five subcontractors by March 2003, 3 months after negotiating the cost of the modification and 13 months after authorizing the work to begin. 
The contracting officer explained that it took until December 2002 to negotiate the 2002 contract change because the additional work was extremely complex and, as a result, the modification needed to be vetted through the many subcontractors that support the prime contractor. DOD guidance states that an integrated baseline review (IBR) is to be conducted within 6 months of the award of a new contract or a major change to an existing contract. The review verifies the technical content of the baseline. It also ensures that contractor personnel understand and have been adequately trained to collect EVM data. The review also verifies the accuracy of the related budget and schedules, ensures that risks have been properly identified, assesses the contractor's ability to properly implement EVM, and determines whether the work identified by the contractor meets the program's objectives. The government's program manager and technical staff carry out this review with their contractor counterparts. Completing an IBR of the new baseline has been of particular importance because the July 2001 IBR for the initial contract identified more than 300 deficiencies in the contractor's formulation and execution of the baseline. For example, the contractor had not defined a critical path for the overall effort, many tasks did not have sufficient milestones to allow the contractor to objectively measure performance, and contractor personnel who were responsible for reporting earned value were making mistakes in measuring actual performance against the baseline. In March 2003, MDA began a review of the contractor's new baseline, which reflected the contract modification. Completing this IBR took until July 2003 because of the complexity of the program and the many subcontractors involved. Although the review team found fewer problems with the contractor's formulation and execution of the new baseline, problems were still identified.
For example, the IBR showed that in some cases the baseline did not reflect the new statement of work. Also, both the prime contractor and subcontractors improperly allocated budget to activities that indirectly affect a work product (known as level-of-effort activities) when they could have associated these activities with a discrete end product. Because of the way these activities are accounted for, this designation could mask true cost variances. Even before the IBR began, DCMA recognized another problem with the contractor's EVM reports. In its December 2002 cost performance report, the contractor reported that it expected no cost overrun at contract completion. This implied that the program was not experiencing any problems that could result in significant cost or schedule growth. However, DCMA stated that October 2002 was the second month in a row in which the contractor had used management reserve funds to offset a significant negative cost variance. DCMA emphasized that this is not the intended purpose of management reserves. (Management reserves are a part of the total project budget intended to fund work that is anticipated but not currently defined.) DCMA officials told us that, while this is not a prohibited practice, most programs wait until their work is almost complete (that is, 80 to 90 percent complete) before judging that the management reserve will not be needed for additional undefined work and can be applied to unfavorable contract cost variances.

Because of the President's direction to begin fielding a ballistic missile defense system in 2004, MDA took a higher-risk approach by beginning GMD system integration before knowing whether its critical technologies were mature. If development and testing progress as planned, however, MDA expects to have demonstrated the maturity of 7 of the 10 critical GMD technologies before the element is initially fielded in September 2004 and 2 others during fiscal year 2005.
If technologies do not achieve their objectives during testing, MDA may have to spend additional funds in an attempt to identify and correct problems by September 2004 or accept a less capable system. Because of other funding and scheduling priorities, MDA does not plan to demonstrate through integrated flight tests whether the Cobra Dane radar’s software can process and communicate data on the location of enemy missiles in “real time.” Although tests using sea- or air-launched targets before September 2004 would provide otherwise unavailable information on the software’s performance, we recognize those tests would be costly and funds have not been allocated for that purpose. We also recognize that the most cost efficient means of testing the Cobra Dane radar is through launches involving foreign test missiles. However, we believe it would be useful for MDA to consider whether the increased confidence provided by a planned test event outweighs other uses for those funds. MDA is investing a significant amount of money to achieve an operational capability during the first block of GMD’s development, and the agency expects to continue investing in the element’s improvement over the next several years. Because MDA is also developing other elements and must balance its investment in each, it needs an accurate GMD cost estimate. If it is used as intended, the EVM system can be an effective means of monitoring one of GMD’s largest costs, the cost of having a contractor develop the GMD system. It is understandable that the dynamic changes in MDA’s acquisition strategy led to major contract modifications, which made it more difficult for the contractor to establish a stable baseline. However, in this environment, it is even more important that MDA find ways to ensure the integrity of the interim baselines and to quickly determine that revised baselines can be fully relied on to identify potential problems before they significantly affect the program’s cost. 
To increase its confidence that the Ground-based Midcourse Defense element fielded in 2004 will operate as intended, we recommend that the Secretary of Defense direct the Director, Missile Defense Agency, to explore its options for demonstrating the upgraded Cobra Dane radar in its new ballistic missile defense role in a real-world environment before September 2004. To improve MDA's oversight of the GMD element and to provide the Congress with the best available information for overseeing the program, we recommend that the Secretary of Defense direct the Director, Missile Defense Agency, to: ensure that when a contractor is authorized to begin new work before a price is negotiated, DCMA validates the performance measurement baseline to the extent possible by (1) tracking the movement of budget from the authorized, unpriced work account into the baseline, (2) verifying that the work packages accurately reflect the new work directed, and (3) reporting the results of this effort to MDA; and strive to initiate and complete an integrated baseline review (IBR) of any major contract modification within 6 months. DOD's comments on our draft report are reprinted in appendix II. DOD concurred with our first recommendation. DOD stated that MDA is exploring its options for demonstrating, prior to 2004, the upgraded Cobra Dane radar in a real-world environment. However, DOD noted that because it takes considerable time to develop and produce targets and to conduct safety and environmental assessments, completing a Cobra Dane radar test before September 2004 would be very challenging. DOD concluded that "targets of opportunity" (flight tests of foreign missiles) and ground testing may provide the best means to demonstrate the radar's maturity in the near term. DOD partially concurred with our second recommendation.
In responding to the first part of recommendation two, DOD stated that MDA and the DCMA will jointly determine the feasibility of tracking the budget for authorized, unpriced work into the baseline and will concurrently assess work package data while establishing the formal performance measurement baseline. DOD also stated that a selected portion of this work is already being accomplished by DCMA. We continue to believe in the feasibility of our recommendation. DCMA officials told us that they could monitor the movement of budget into the baseline and verify the work packages associated with the budget. In addition, the guidelines state that surveillance may be accomplished through sampling of internal and external data. We believe that if DCMA sampled the data as it is transferred into the baseline, the implementation of this recommendation should not be burdensome. In responding to the second part of recommendation two, DOD stated that MDA will continue to adhere to current DOD policy by starting an IBR of any major contract modification within 6 months. MDA correctly pointed out that DOD’s Interim Defense Acquisition Guidebook only requires a review be initiated within 6 months (180 days) after a contract is awarded or a major modification is issued. However, DOD’s Earned Value Management Implementation Guide states that such a review is conducted within 6 months. Similar language is found in the applicable clause from the GMD contract, which states that such reviews shall be scheduled as early as practicable and should be conducted within 180 calendar days after the incorporation of major modifications. While we understand the difficulty of conducting reviews within 180 days when the contract is complex and many subcontractors are involved, we believe that it is important for the government to complete an IBR as soon as possible to ensure accurate measurement of progress toward the program’s cost, schedule, and performance goals. 
DOD also provided technical comments to this report, which we considered and implemented as appropriate. In its technical comments, for example, DOD expressed particular concern that our draft report language asserting MDA’s inability to rely on the EVM system was unsupported and misleading. DOD also stated that its prime contractor’s EVM system is reliable. It stated, for example, that MDA has reviewed, and continues to review on a monthly basis, the contractor’s cost performance reports and that the prime contractor’s EVM system and accounting systems have been fully certified and validated by DCMA. We modified our report to better recognize MDA’s ability to use and trust the EVM system. However, we still believe that MDA would benefit from taking additional measures to increase its confidence in the accuracy of its interim baselines. Also, when the revised baseline is in place, a review of its formulation and execution is necessary before MDA can confidently and fully rely on data from the EVM system. We conducted our review from December 2001 through August 2003 in accordance with generally accepted government auditing standards. As arranged with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we plan to provide copies of this report to interested congressional committees, the Secretary of Defense, and the Director, Missile Defense Agency. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov/. If you or your staff have any questions concerning this report, please contact me on (202) 512-4841. Major contributors to this report are listed in appendix V. 
To determine when MDA plans to demonstrate the maturity of technologies critical to the performance of GMD's Block 2004 capability, we reviewed the element's critical technologies using technology readiness levels (TRLs) developed by the National Aeronautics and Space Administration and used by DOD. We did so by asking contractor officials at the Boeing System Engineering and Integration Office in Arlington, Virginia, to identify the most critical technologies and to assess the level of maturity of each technology using definitions developed by the National Aeronautics and Space Administration. We reviewed these assessments along with program documents, such as the results of recent flight tests, and discussed the results with contractor and agency officials in order to reach a consensus, where appropriate, on the readiness level for each technology and to identify the reasons for any disagreements. In reviewing the agency's current cost estimate to develop the first block of the GMD element and its test bed, we reviewed and analyzed budget backup documents, cost documents, and selected acquisition reports for the GMD program extending over a period of several years. We also met with program officials responsible for managing the development and fielding of the GMD Block 2004 capability. For example, we met with officials from the GMD Joint Program Office in Arlington, Virginia, and Huntsville, Alabama, and from the Office of the Deputy Assistant for Program Integration at MDA, Arlington, Virginia. To determine whether there were any significant risks associated with the estimate, we met with agency officials responsible for determining the cost of the GMD element to find out if there were costs that were omitted from, but should have been included in, the estimate. We also analyzed data from cost performance reports that the GMD contractor developed for MDA. We reviewed data from GMD element and contracting officials and conducted interviews to discuss the data.
Although we did not independently verify the accuracy of the cost performance reports we received from MDA, the data were assessed independently by DCMA.

Appendix III: Technology Readiness Level Assessment Matrix

TRL 1. Lowest level of technology readiness. Scientific research begins to be translated into applied research and development. Examples might include paper studies of a technology's basic properties. Hardware: None (paper studies and analysis).

TRL 2. Invention begins. Once basic principles are observed, practical applications can be invented. The application is speculative, and there is no proof or detailed analysis to support the assumption. Examples are still limited to paper studies. Hardware: None (paper studies and analysis).

TRL 3. Active research and development is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative. Hardware: Analytical studies and demonstration of nonscale individual components (pieces of subsystem).

TRL 4. Basic technological components are integrated to establish that the pieces will work together. This is relatively "low fidelity" compared to the eventual system. Examples include integration of "ad hoc" hardware in a laboratory. Hardware: Low fidelity breadboard. Integration of nonscale components to show pieces will work together. Not fully functional or form or fit, but representative of a technically feasible approach suitable for flight articles.

TRL 5. Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include "high fidelity" laboratory integration of components. Hardware: High fidelity breadboard. Functionally equivalent but not necessarily form and/or fit (size, weight, materials, etc.). Should be approaching appropriate scale. May include integration of several components with reasonably realistic support elements/subsystems to demonstrate functionality. Demonstration environment: Lab demonstrating functionality but not form and fit. May include flight-demonstrating breadboard in surrogate aircraft. Technology ready for detailed design studies.

TRL 6. Representative model or prototype system, which is well beyond the breadboard tested for TRL 5, is tested in a relevant environment. Represents a major step up in a technology's demonstrated readiness. Examples include testing a prototype in a high fidelity laboratory environment or in a simulated operational environment. Hardware: Prototype. Should be very close to form, fit, and function. Probably includes the integration of many new components and realistic supporting elements/subsystems if needed to demonstrate full functionality of the subsystem. Demonstration environment: High-fidelity lab demonstration or limited/restricted flight demonstration for a relevant environment. Integration of technology is well defined.

TRL 7. Prototype near or at planned operational system. Represents a major step up from TRL 6, requiring the demonstration of an actual system prototype in an operational environment, such as in an aircraft, on a vehicle, or in space. Examples include testing the prototype in a test bed aircraft. Hardware: Prototype. Should be form, fit, and function integrated with other key supporting elements/subsystems to demonstrate full functionality of the subsystem. Demonstration environment: Flight demonstration in a representative operational environment such as a flying test bed or demonstrator aircraft. Technology is well substantiated with test data.

TRL 8. Technology has been proven to work in its final form and under expected conditions. In almost all cases, this TRL represents the end of true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine if it meets design specifications.

TRL 9. Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. In almost all cases, this is the end of the last "bug fixing" aspects of true system development. Examples include using the system under operational mission conditions.

Pulling together essential cost, schedule, and technical information in a meaningful, coherent fashion is always a challenge for any program. Without this information, management of the program will be fragmented, presenting a distorted view of program status. For several decades, DOD has compared the value of work performed to the work's actual cost. This measurement is referred to as Earned Value Management (EVM). Earned value goes beyond the two-dimensional approach of comparing budgeted costs to actual costs. It attempts to compare the value of work accomplished during a given period with the work scheduled for that period. By using the value of completed work as a basis for estimating the cost and time needed to complete the program, the earned value concept should alert program managers to potential problems early in the program. In 1996, in response to acquisition reform initiatives, DOD reemphasized the importance of earned value in program management and adopted 32 criteria for evaluating the quality of management systems. These 32 criteria are organized into 5 basic categories: organization, planning and budgeting, accounting considerations, analysis and management reports, and revisions and data maintenance. The 32 criteria are listed in table 1.
In general terms, the criteria require contractors to (1) define the contractual scope of work using a work breakdown structure; (2) identify organizational responsibility for the work; (3) integrate internal management subsystems; (4) schedule and budget authorized work; (5) measure the progress of work based on objective indicators; (6) collect the cost of labor and materials associated with the work performed; (7) analyze any variances from planned costs and schedules; (8) forecast costs at contract completion; and (9) control changes. The criteria have become the standard for EVM and have also been adopted by major U.S. government agencies, industry, and the governments of Canada and Australia. The full application of EVM system criteria is appropriate for large cost-reimbursable contracts where the government bears the cost risk. For such contracts, the management discipline described by the criteria is essential. In addition, data from an EVM system have been proven to provide objective reports of contract status, allowing numerous indices and performance measures to be calculated. These can then be used to develop accurate estimates of anticipated costs at completion, providing early warning of impending schedule delays and cost overruns. The standard format for tracking earned value is the Cost Performance Report (CPR). The CPR is a monthly compilation of cost, schedule, and technical data that displays the performance measurement baseline, any cost and schedule variances from that baseline, the amount of management reserve used to date, the portion of the contract that is authorized unpriced work, and the contractor's latest revised estimate to complete the program. As a result, the CPR can be used as an effective management tool because it provides the program manager with early warning of potential cost and schedule overruns. Using data from the CPR, a program manager can assess trends in cost and schedule performance.
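The earned value comparisons described here reduce to a handful of standard formulas. The sketch below computes the usual variances and indices from the three quantities an EVM system tracks: budgeted cost of work scheduled (BCWS), budgeted cost of work performed (BCWP, the "earned value"), and actual cost of work performed (ACWP). The dollar figures are illustrative only, not GMD data:

```python
def evm_metrics(bcws, bcwp, acwp, budget_at_completion):
    """Standard earned value calculations from one reporting period."""
    cv = bcwp - acwp    # cost variance: negative means an overrun
    sv = bcwp - bcws    # schedule variance: negative means behind schedule
    cpi = bcwp / acwp   # cost performance index: < 1.0 means overrunning
    spi = bcwp / bcws   # schedule performance index: < 1.0 means behind
    # A common independent estimate at completion assumes the cost
    # efficiency achieved to date continues for the remaining work.
    eac = budget_at_completion / cpi
    return {"CV": cv, "SV": sv, "CPI": round(cpi, 2), "SPI": round(spi, 2),
            "EAC": round(eac, 1)}

# Illustrative period: $100M of work scheduled, $90M of work earned,
# $120M actually spent, against a $1,000M total budget.
metrics = evm_metrics(bcws=100.0, bcwp=90.0, acwp=120.0,
                      budget_at_completion=1000.0)
print(metrics)  # CV = -30.0, CPI = 0.75, projected EAC = 1333.3
```

Indices like CPI and SPI are what allow a reviewer to project final cost from partial performance, which is why an unreliable baseline undermines the entire analysis.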
This information is useful because trends tend to continue and can be difficult to reverse. Studies have shown that once programs are 15 percent complete, the performance indicators are indicative of the final outcome. For example, a CPR showing a negative trend for schedule status would indicate that the program is behind schedule. By analyzing the CPR, one could determine the cause of the schedule problem, such as delayed flight tests, changes in requirements, or test problems, because the CPR contains a section that describes the reasons for the negative status. A negative schedule condition is a cause for concern because it can be a predictor of later cost problems: additional spending is often necessary to resolve problems. For instance, if a program finishes 6 months later than planned, additional costs will be incurred to cover the salaries of personnel and their overhead beyond what was originally expected. CPR data provide the basis for independent assessments of a program's cost and schedule status and can be used to project final costs at completion as well as to determine when a program should be completed. Examining a program's management reserve is another way to use a CPR to identify potential issues early. Management reserves, which are funds that may be used as needed, provide flexibility to cope with problems or unexpected events. EVM experts agree that transfers of management reserve should be tracked and reported because they are often problem indicators. An alarming situation arises if the CPR shows that the management reserve is being used at a faster pace than the program is progressing toward completion. For example, a problem would be indicated if a program has used 80 percent of its management reserve but only completed 40 percent of its work.
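The management reserve warning sign described above, reserve being consumed faster than work is completed, can be checked mechanically from CPR data. A minimal sketch using the rules of thumb cited in this appendix (all figures illustrative):

```python
def reserve_warning(mr_used, mr_total, work_complete_pct):
    """Flag when management reserve burns faster than work is completed.

    mr_used / mr_total : dollars of management reserve consumed vs. budgeted
    work_complete_pct  : fraction of contract work completed (0.0 to 1.0)
    """
    mr_burn_pct = mr_used / mr_total
    # Rule of thumb from the report: applying reserve to cost variances is
    # normally deferred until work is 80 to 90 percent complete, so a burn
    # rate ahead of progress before that point is a problem indicator.
    return mr_burn_pct > work_complete_pct and work_complete_pct < 0.80

# The report's example: 80 percent of reserve used at 40 percent complete.
print(reserve_warning(mr_used=80.0, mr_total=100.0, work_complete_pct=0.40))
```

In the report's example the check fires, reflecting exactly the pattern DCMA flagged on the GMD contract: reserve applied to cost variances long before the work was near completion.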
A program’s management reserve should contain at least 10 percent of the cost to complete a program so that funds will always be available to cover future unexpected problems that are more likely to surface as the program moves into the testing and evaluation phase. In addition to the individual named above, Yvette Banks, Myra Watts Butler, Cristina Chaplain, Roger Corrado, Jennifer Echard, Dayna Foster, Matt Lea, Karen Richey, and Randy Zounes made key contributions to this report.

A number of countries hostile to the United States and its allies have or will soon have missiles capable of delivering nuclear, biological, or chemical weapons. To counter this threat, the Department of Defense's (DOD's) Missile Defense Agency (MDA) is developing a system to defeat ballistic missiles. MDA expects to spend $50 billion over the next 5 years to develop and field this system. A significant portion of these funds will be invested in the Ground-based Midcourse Defense (GMD) element. To field elements as soon as practicable, MDA has adopted an acquisition strategy whereby capabilities are upgraded as new technologies become available and is implementing it in 2-year blocks. Given the risks inherent to this strategy, GAO was asked to determine when MDA plans to demonstrate the maturity of technologies critical to the performance of GMD's Block 2004 capability and to identify the estimated costs to develop and field the GMD element and any significant risks with the estimate. GMD is a sophisticated weapon system being developed to protect the United States against limited attacks by long-range ballistic missiles. It consists of a collection of radars and a weapon component--a three-stage booster and exoatmospheric kill vehicle--integrated by a centralized control system that formulates battle plans and directs the operation of GMD components. Successful performance of these components is dependent on 10 critical technologies.
MDA expects to demonstrate the maturity of most of these technologies before fielding the GMD element, which is scheduled to begin in September 2004. However, the agency has accepted higher cost and schedule risks by beginning integration of the element's components before these technologies have matured. So far, MDA has matured two critical GMD technologies. If development and testing progress as planned, MDA expects to demonstrate the maturity of five other technologies by the second quarter of fiscal year 2004. The radar technologies are the least mature. MDA intends to demonstrate the maturity of an upgraded early warning radar in California in the first quarter of fiscal year 2005 and a sea-based radar in the Pacific Ocean in the fourth quarter of that year. Although MDA does not plan to demonstrate the maturity of the technology of the early warning radar in Alaska, which will serve as the primary fire control radar, through its own integrated flight tests, it may be able to do so through the anticipated launch of foreign test missiles. MDA estimates that it will spend about $21.8 billion between 1997 and 2009 to develop the GMD element. This estimate includes $7.8 billion to develop and field the GMD Block 2004 capability. For example, the funds will be used to install interceptors at two sites, upgrade existing radars and testing infrastructure, and develop the sea-based X-band radar. We found that MDA has incurred a greater risk of cost growth because for more than a year the agency was not able to rely fully on data from its primary tool for monitoring whether the GMD contractor has been performing work within cost and on schedule. In February 2002, MDA modified the prime contract to reflect an increased scope of work for developing GMD. It was not until July 2003 that the agency completed a review to ensure that the data was fully reliable.
DOD defines sexual assault as intentional sexual contact, characterized by use of force, threats, intimidation, abuse of authority, or when the victim does not or cannot consent. The term includes a broad category of sexual offenses consisting of the following specific Uniform Code of Military Justice offenses: rape, sexual assault, aggravated sexual contact, abusive sexual contact, forcible sodomy (forced oral or anal sex), or attempts to commit these acts. CDC is one of the major operating components of the Department of Health and Human Services, which serves as the federal government’s principal agency for protecting the health of all U.S. citizens. As part of its health-related mission, CDC serves as the national focal point for developing and applying disease prevention and control, environmental health, and health promotion and education activities. Specifically, CDC, among other things, conducts research to enhance prevention, develops and advocates public-health policies, implements prevention strategies, promotes healthy behaviors, fosters safe and healthful environments, and provides associated training. In 1992, CDC established the National Center for Injury Prevention and Control as the lead federal organization for violence prevention. The center’s Division of Violence Prevention focuses on stopping violence, including sexual violence, before it begins, and it works to achieve this by conducting research on the factors that put people at risk for violence, examining the effective adoption and dissemination of prevention strategies, and evaluating the effectiveness of violence-prevention programs. In addition, CDC operates the Rape Prevention and Education grant program in all 50 states, the District of Columbia, Puerto Rico, and four U.S. territories to strengthen sexual-violence prevention efforts at the local, state, and national level. In 2004, CDC published a framework for effective sexual-violence prevention strategies. 
This framework includes prevention concepts and strategies, such as identifying risk and protective factors (i.e., factors that may put a person at risk for committing sexual assault or that, alternatively, may prevent harm). CDC suggests that grantees of the Rape Prevention and Education program use this framework as a foundation for planning, implementing, and evaluating activities conducted. Since fiscal year 2004, Congress has mandated, and in response DOD has implemented, a number of improvements to its sexual-assault prevention and response program. For example, in 2004, Congress required the Secretary of Defense to develop a comprehensive policy for DOD on the prevention of and response to sexual assaults involving servicemembers and to submit an annual report that includes, among other things, data on reported incidents within each military service and the results of an evaluation of the effectiveness of DOD’s sexual-assault prevention and response policy. In 2005, DOD established its sexual-assault prevention and response program to promote the prevention of sexual assault, to encourage increased reporting of such incidents, and to improve victim response capabilities; and DOD has issued annual reports tracking the number of sexual assaults reported each year. Since that time, DOD has undertaken a variety of activities both to prevent sexual assaults from occurring and to increase the department’s visibility over and awareness of sexual-assault incidents that do occur. Specifically, in response to statutory requirements, DOD has provided active-duty servicemembers with two options for reporting a sexual assault: (1) restricted and (2) unrestricted. DOD’s restricted reporting option allows sexual-assault victims to confidentially disclose an alleged sexual assault to select individuals and receive medical and mental health-care treatment without initiating an official investigation.
In cases where a victim elects restricted reporting, first responders may not disclose confidential communications to law-enforcement or command authorities unless certain exceptions apply, and improper disclosure of confidential communications and medical information may result in discipline pursuant to the Uniform Code of Military Justice or other adverse personnel actions. In contrast, DOD’s unrestricted reporting option triggers an investigation by a military criminal-investigative organization. In an effort to increase victims’ confidence in the military-justice process and to encourage reporting, DOD revised its sexual-assault prevention and response policy in January 2012 to protect victims of sexual assault from coercion, retaliation, and reprisal. Various offices and organizations within DOD play a role in preventing and responding to sexual assault within the military. The Under Secretary of Defense for Personnel and Readiness is responsible for developing the overall policy and guidance for the department’s sexual-assault prevention and response program, except for criminal-investigative policy matters assigned to the DOD Inspector General and legal processes in the Uniform Code of Military Justice. Accordingly, the Under Secretary of Defense for Personnel and Readiness oversees SAPRO, which serves as the department’s single point of authority, accountability, and oversight for its sexual-assault prevention and response program.
The responsibilities of the Under Secretary of Defense for Personnel and Readiness and SAPRO with regard to sexual-assault prevention and response include providing the military services with guidance and technical support and facilitating the identification and resolution of issues; developing programs, policies, and training standards for the prevention of, reporting of, and response to sexual assault; developing strategic program guidance and joint planning objectives; overseeing the department’s collection and maintenance of data on reported alleged sexual assaults involving servicemembers; establishing mechanisms to measure the effectiveness of the department’s sexual-assault prevention and response program; and preparing the department’s mandated annual reports to Congress on sexual assaults involving servicemembers. Each military service has established a sexual-assault prevention and response office that is responsible for overseeing and managing the service’s sexual-assault program. Each military service has also established the SARC position; SARCs serve as the single point of contact for ensuring that sexual-assault victims receive appropriate and responsive care and are generally responsible for implementing their respective services’ SAPR programs. According to DOD’s instruction, commanders, supervisors, and managers at all levels are responsible for the effective implementation of both the policy and the program. Other responders include victim advocates, judge advocates, medical and mental health-care providers, criminal-investigative personnel, law-enforcement personnel, and chaplains. The Secretaries of the military departments are responsible for establishing policies to implement the sexual-assault prevention and response program and procedures, and ensuring compliance with DOD’s policy. Further, they are responsible for establishing policies that ensure commander accountability for program implementation and execution.
Each military service maintains a primary policy document on its sexual-assault prevention and response program. Much like DOD’s directive and instruction on sexual-assault prevention and response, the service policies outline responsibilities of relevant stakeholders, including commanders, SARCs, and victim advocates, and training requirements for all personnel. DOD developed its sexual-assault prevention strategy in 2014 using CDC’s framework for effective sexual-violence prevention strategies, but DOD did not link prevention activities to desired outcomes or fully identify risk and protective factors. Specifically, DOD identified 18 prevention-related activities in its strategy, but did not specify how these activities are linked with the desired outcomes of the department’s overall prevention efforts. Further, in adapting CDC’s framework to address the unique nature of the military environment, DOD did not fully identify risk and protective factors (i.e., factors that may put a person at risk for committing sexual assault or that, alternatively, may prevent harm) in its updated strategy. In April 2014, DOD published an updated prevention strategy using concepts from CDC’s framework for effective sexual-violence prevention strategies. Following guidance outlined by the Secretary of Defense in the 2013 Department of Defense Sexual Assault Prevention and Response Strategic Plan, SAPRO developed and executed a sexual-assault prevention campaign to identify evidence-based prevention practices and lessons learned, in order to update the department’s 2008 Sexual Assault Prevention Strategy. Though not required to do so, DOD consulted with CDC and incorporated its framework and prevention-related concepts into the department’s prevention strategy.
Specifically, DOD incorporated CDC’s concept that defines the different levels at which prevention efforts occur and another CDC concept that describes the importance of identifying and understanding the domains in which sexual violence takes place. For example, in the “Defining Prevention” section of its strategy, DOD notes that it adopted CDC’s concept that there are three levels of prevention based on when the prevention efforts occur:

Primary Prevention: Approaches that take place before sexual violence has occurred to prevent initial perpetration.

Secondary Prevention: Immediate responses after sexual violence has occurred to address the early identification of victims and the short-term consequences of violence.

Tertiary Prevention: Long-term responses after sexual violence has occurred to address the lasting consequences of violence and sex-offender treatment interventions.

According to DOD, primary prevention is at the core of its focus in developing prevention-related activities, which seek to reduce, with the goal of eliminating, factors leading to or associated with sexual violence, thereby stopping the crime before it occurs. DOD further states that its prevention programs will not rely solely on the training and education of individuals considered to be at risk. Rather, DOD states that its focus on primary prevention will involve empowered and competent individuals interacting in an environment that has been sustained to promote the best possible outcomes. DOD’s strategy also incorporates CDC’s concept that there are risk and protective factors that influence the occurrence of sexual violence. According to CDC, identification of risk factors and protective factors is a key step in developing an effective prevention strategy in that it helps to build an understanding of the circumstances—both positive and negative—that may play a role in the perpetration of such incidents.
To further enhance the effectiveness of its efforts, CDC categorizes these factors relative to the four domains in which they are identified to exist: (1) society; (2) community; (3) relationship; and (4) individual. According to CDC, this enables an organization to tailor its prevention strategy based on the characteristics of a specific population. In its 2014–16 prevention strategy, DOD adapted CDC’s approach by identifying five domains in which it would focus its prevention efforts: (1) society; (2) the military community (DOD/services/units); (3) leaders at all levels; (4) relationships; and (5) individuals. As depicted in figure 1, while DOD’s model largely mirrors the one created by CDC, it also included “leaders” as a distinct domain of influence because, according to its 2014–16 strategy, the department wanted to recognize the essential role of leadership and to highlight the necessity that commanders and their staffs develop and execute tactics that target this “center of gravity” for prevention efforts. DOD further notes in its strategy that the inherent complexities of preventing sexual assault necessitate a number of interventions that span multiple levels to achieve the greatest and most lasting impact. DOD’s strategy also incorporates key concepts that CDC has identified as being included in the public health approach to prevention. We reviewed DOD’s 2014–16 prevention strategy and found that it identifies, and categorizes according to the applicable domain, the four concepts that CDC identified as being included in effective public health strategies: (1) inputs, (2) activities, (3) outputs, and (4) outcomes. For example, within its society domain, DOD identifies inputs, or the resources on which the effectiveness of an effort depends, such as community volunteers and collaboration with federal partners, coalitions, and other primary prevention experts.
In addition, DOD’s strategy specifies outputs, which are the direct products of implemented activities and are different from outcomes, which are also included in DOD’s strategy and defined as the intended effect of these activities. For example, DOD identifies the development of courses that instruct and empower members as one of the outputs of its efforts, whereas it notes that the establishment and maintenance of a culture that supports the prevention of sexual assault is a desired outcome of its efforts within the “leaders” domain. DOD’s strategy identifies 18 prevention-focused activities that it plans to implement as part of its effort to prevent sexual assault, but it does not link these activities to desired outcomes. According to CDC, effective public-health strategies establish a link between activities and their intended outcomes to help determine whether the actual events that take place as part of a program will logically lead to the intended effect. Further, providing a step-by-step roadmap can help identify gaps in program logic that might not otherwise be apparent; persuade skeptics that progress is being made in the right direction, even if the destination has not yet been reached; and aid program managers in identifying what needs to be emphasized right now or what can be done to accelerate progress. In addition to CDC guidance, DOD’s Strategic Management Plan for Fiscal Years 2014–2015 identifies the alignment of activities and goals as a key step in achieving desired outcomes. Our prior work on effective agency strategic reviews has also shown that it is important to review progress toward strategic objectives in that it can help to determine subsequent actions, and that leaders and responsible managers should be held accountable for knowing the progress being made in achieving outcomes. 
In its 2014–16 prevention strategy, DOD identifies 18 targeted activities, the general time frame in which they are to be accomplished, and the office(s) responsible for their implementation. Specifically, DOD’s strategy includes activities such as conducting specialized leader sexual-assault prevention training, establishing collaboration forums to capture and share prevention best practices and lessons learned, and incorporating specific sexual-assault monitoring, measures, and education into normal command training, readiness assessments, and safety forums. In a different section of its strategy, DOD lists five general outcomes of its prevention efforts, such as acceptance and endorsement of the values that seek to prevent sexual assault and an environment in which servicemembers’ networks support a culture of sexual-assault prevention. Although both are identified in the strategy, DOD does not discuss what, if any, connection exists between activities and outcomes in the department’s efforts to prevent sexual assault. During our review, we spoke with DOD officials responsible for developing the department’s strategy who acknowledged that while it was modeled after CDC’s guidance on effective public-health strategies, it did not specify how the activities and outcomes identified in the strategy are linked. According to these officials, the department did not link its prevention activities with outcomes because of a complex interplay that exists between these elements. For example, officials described a scenario in which a single prevention activity could be connected with multiple outcomes. We recognize that such a scenario is possible and believe that it reinforces the importance of understanding how specific activities are expected to contribute to desired outcomes.
Thus, without a defined link, DOD may not be able to determine which activities are having the desired effect or, when necessary, to make timely and informed adjustments to its efforts to help ensure it continues to progress toward desired outcomes. Furthermore, DOD may lack the information that is needed to conduct a rigorous evaluation of the effectiveness of its efforts to prevent sexual assault. DOD’s strategy is based on CDC’s framework for effective sexual-violence prevention strategies, and it addresses some but not all of the elements that CDC identified as necessary to maximize the effectiveness of prevention efforts. According to CDC, some factors may put a person at risk for sexual-violence perpetration and victimization, while other factors may offer protection from harm. CDC’s work has demonstrated that by identifying these influences, referred to as risk factors and protective factors, relative to the domain or environment in which they exist, organizations can focus their efforts on eliminating factors that promote sexual violence while also supporting the factors that prevent it. In addition to CDC’s work on prevention strategies, the Office of Management and Budget issued guidance in 2015 on agencies’ strategic reviews in which it acknowledged that while agencies cannot mitigate all risks related to achieving strategic objectives and performance goals, they should identify, measure, and assess challenges related to mission delivery, to the extent possible. As noted previously, DOD adapted CDC’s framework for sexual-violence prevention strategies by identifying five domains to which it would tailor its prevention program: (1) society; (2) the military community (DOD/services/units); (3) leaders at all levels; (4) relationships; and (5) individuals. We reviewed DOD’s strategy and found that it includes risk factors identified by CDC for three of these domains—individuals, relationships, and society.
For example, within the individual risk domain, DOD identified factors such as alcohol and drug use and hostility toward women as risks that may influence sexual violence. Within the relationship domain, DOD identified factors such as associating with sexually aggressive and delinquent peers and having an emotionally unsupportive familial environment as possible influences on the incidence of sexual violence. However, DOD does not specify risk factors for the two domains over which it potentially has the greatest influence—leaders at all levels of DOD and the military community (i.e., DOD/services/units). For example, the strategy does not identify potential risk factors associated with these domains, such as recognizing that the inherent nature of certain types of commands or units may cultivate an environment in which there is an increased risk of sexual assault. While not specifically tailored to its military-community domain, DOD’s prevention strategy includes risk factors that CDC had identified as generally applicable to the community domain. For example, DOD’s 2014–16 prevention strategy identifies general tolerance of sexual violence within the community and weak community sanctions against sexual-violence perpetrators as risk factors for that category. While these risk factors may generally apply to DOD, they do not meet CDC’s criteria for effective prevention programs because DOD did not identify risk factors and mitigation techniques based on the unique aspects of the military-community domain. According to officials with DOD SAPRO, they did not identify risk factors specific to DOD’s military community and leaders domains because insufficient research existed on risk factors for these domains and the department did not independently take steps to identify relevant risk factors prior to the strategy’s publication. 
A senior DOD official added that DOD asked the RAND Corporation, after the strategy was published, to analyze risk factors, including those for the military community and leaders domains, as a part of its work on the 2014 Military Workplace Study. In its fiscal year 2014 Department of Defense Annual Report on Sexual Assault in the Military, DOD reported on the findings of RAND’s analysis, which included several risk factors for the military-community domain such as differences between service branches as well as between the active-duty and reserve components. RAND did not conduct a similar analysis to identify risk factors for DOD’s leaders domain. DOD also included six protective factors identified by CDC in its prevention strategy, but it does not specify how they relate to the five domains. For example, emotional health and connectedness were listed as protective factors for high-school boys that may help to curb the initiation of sexual violence. For high-school girls, academic achievement was listed as a factor that may reduce their exposure to sexual violence. However, the protective factors that DOD included in its strategy are grouped together in a general category rather than being listed under the domains to which they correspond. During our review, we spoke with a senior DOD official responsible for developing the strategy who acknowledged that more research is needed to identify risk and protective factors for each of the domains in its model. Without a more comprehensive list of such factors that correspond to each of the domains in its strategy, DOD may be limited in its ability to take an evidence-based approach to the prevention of sexual assault. Further, DOD may not be able to accurately characterize the environment in which sexual assaults occur or to develop activities and interventions to more effectively prevent them. Table 1 provides additional details about the risk factors identified by domain in DOD’s 2014-16 Sexual Assault Prevention Strategy.
DOD and the military services developed and are in the process of implementing prevention-focused activities, but they have not taken steps to help ensure that activities developed at the local level are consistent with the overarching objectives of DOD’s strategy. As noted previously, DOD’s 2014–16 prevention strategy identifies 18 prevention-focused activities and, according to SAPRO officials, 2 have been implemented and efforts to address the remaining 16 are ongoing. In addition to the activities listed in DOD’s strategy, installation-based personnel have developed and implemented various prevention activities at military-service installations. However, these installation-developed activities may not be consistent with DOD’s prevention strategy because DOD and the services have not communicated the purpose of the strategy and disseminated it to the installation-based personnel responsible for developing and implementing activities at the local level. Further, the military services’ SAPR policies—key conduits of such communication—have not been updated to align with the guidance in the strategy. During visits to selected installations, we also found that there is limited collaboration taking place on the prevention activities developed locally, which could further affect the effectiveness and efficiency of the department’s efforts to prevent sexual assault within the military. DOD and the military services are in the process of implementing the prevention-focused activities noted in DOD’s 2014-16 Sexual Assault Prevention Strategy. In its strategy, DOD specifies that one of the department’s goals is to deliver consistent and effective sexual-assault prevention methods and programs. In doing so, DOD believes that it will help to instill a culture of mutual respect and trust, professional values, and team commitment, which are reinforced to create an environment where sexual assault is not tolerated, condoned, or ignored.
To achieve this goal, DOD identified 18 activities in its prevention strategy as well as their general time frame for completion, their priority relative to the overall strategy, and the office with primary responsibility for their implementation (i.e., SAPRO, military department, or service). For example, one activity assigns SAPRO responsibility for developing a military community of practice focused on primary prevention within 1 year of the strategy’s April 2014 publication. Other activities, such as implementing policies that appropriately address high-risk situations targeted by offenders are designated as responsibilities of the military services and are to be completed within 3 years of the strategy’s implementation. Table 2 provides a comprehensive list of the 18 prevention-focused activities identified in DOD’s 2014-16 Sexual Assault Prevention Strategy as well as the time frames in which they are to be implemented and the offices responsible for their implementation. According to SAPRO, the office responsible for developing the department’s strategy, 2 of the 18 activities identified in the strategy have been fully implemented. Specifically, SAPRO officials said its activities “Implementation of the 2014-16 Sexual Assault Prevention Strategy” and “Develop a military community of practice focused on primary prevention of sexual assault” are complete and the remaining 16 activities are ongoing. Officials stated that the implementation of the 2014-16 Sexual Assault Prevention Strategy activity was completed with its publication in April 2014. SAPRO also noted that a military community of practice was started with the August 2014 implementation of “SAPR Connect”—an online community in which DOD SAPR personnel can collaborate and share ideas, news, research, and insights from experts on issues related to sexual assault. In addition to the 2 activities it identified as implemented, efforts are under way to implement the remaining 16. 
However, SAPRO officials said that the remaining 16 activities identified in its strategy will never be considered “complete” because, as the program develops, the department will consistently revise and renew its approach in these areas. As such, officials stated that the status of the remaining 16 activities is, and will indefinitely remain, designated as “ongoing.” Though the remaining activities are not considered complete, each service has taken steps to support the ongoing efforts specified in the department’s strategy. For example, each military service annually administers sexual-assault prevention and response training that addresses the nature of sexual assault in the military environment using scenario-based, real-life situations to demonstrate the entire cycle of prevention, reporting, response, and accountability procedures. In addition, each service has developed and implemented its own prevention-focused training. For example, in the spring of 2014, the Air Force held a SAPR Stand-Down Day focused on teaching airmen to identify sexual-assault offenders by showing how they operate and to impart the effect that offenders can have on their victims. In 2014, the Marine Corps also expanded its SAPR training efforts to include courses that emphasize character, social courage, and mutual respect among Marines. Specifically, the Marine Corps instituted a 2-hour ethics course of instruction for new recruits who are awaiting travel to their initial military training, which focuses on developing an understanding of sexual assault, harassment, hazing, and alcohol abuse. The services have also taken steps to address the activity that directs them to review and, if necessary, expand alcohol policies to address factors beyond individual use.
For example, some Army installations have adopted more stringent alcohol policies, such as limiting the amount of alcohol that soldiers may have in the barracks or purchase from installation facilities, and the Navy took steps to improve its training of alcohol providers and to engage local-community leadership and organizations to expand prevention efforts off base. In addition, the Marine Corps restricted on-base retail alcoholic beverage sales to the hours of 8:00 a.m. to 10:00 p.m. and limited its availability in non–package stores to no more than 10 percent of the total retail selling floor space, while the Air Force revised its alcoholic beverage policy to deglamorize behavior associated with excessive drinking. The military services have developed and implemented activities at the installation level, independent of DOD’s prevention strategy, in an effort to prevent sexual assault. DOD acknowledged that the 18 activities in the 2014–16 prevention strategy are not the only required prevention activities and encouraged the services to develop their own specific activities. However, DOD also noted that the objectives of DOD’s prevention strategy are to achieve unity of effort and purpose across all of DOD in the execution of sexual-assault prevention. During our visits to selected installations, we found that program personnel were largely unfamiliar with DOD’s prevention strategy and hence may not be implementing activities in a manner consistent with the objectives of DOD’s strategy. In its 2014-16 prevention strategy, DOD highlights that it is important for leaders to employ targeted interventions, standards, and messaging to address issues unique to their unit climate, and that prevention programs should be tailored to specific audiences and for specific purposes and circumstances.
However, DOD also notes that the strategy provides a framework, means, ways, and supporting end states to assist leaders and planners in the development of appropriate activities. SAPRO officials stated that they have implemented several initiatives to communicate directly with the SAPR Program Leads, as well as with servicemembers in the field, on the prevention strategy. For example, as of October 1, 2015, SAPRO has provided workshops to SARCs from the Navy and Marine Corps on implementation of the prevention strategy, via webinar or face-to-face, to help participants translate the strategy into action. Dates for other workshops are being finalized. During the course of our review, we met with military officials and program personnel from a joint base and three service-specific installations who described prevention activities that had been developed locally and were not listed in DOD’s strategy. The efforts at these installations included a variety of activities ranging from displays of sexual-assault awareness symbols to service-sponsored sporting events that were generally based on the theme of preventing sexual assault. Despite their responsibilities for and experience with coordinating and implementing prevention-focused activities, the program personnel we met with consistently said that DOD and their respective services had not communicated and disseminated guidance to them on the department’s prevention strategy and that they were generally unaware of how the department’s 2014–16 prevention strategy related to their development and implementation of sexual-assault prevention activities. In the absence of such guidance, we discussed with SAPR personnel at the installations we visited their processes for determining which prevention-focused activities to sponsor.
For example, we spoke with program personnel at one installation who said that while they were aware of DOD’s prevention strategy, they had not received any headquarters-level guidance or direction on the types of activities they should sponsor in support of their efforts to prevent sexual assault. At another installation, we spoke with program personnel who stated that they did not think the communication flow between the headquarters-level SAPR office and the installation was as fast or as formal as it needed to be to address a constantly changing program. In addition, SAPR personnel provided a briefing on their program in which prevention-focused activities were categorized according to the five domains identified in DOD’s 2014–16 prevention strategy. When we asked how they became familiar with DOD’s prevention strategy, a program official said she had found the information during a self-initiated search through documents on SAPRO’s website. Such an action is noteworthy; however, without a plan to communicate the prevention strategy and roles and responsibilities for its implementation, DOD and the military services cannot be sure that all installation-based program personnel are implementing activities that are designed to achieve the goals and objectives of the department’s prevention strategy. We also found that the services’ SAPR policies—a key conduit for communicating a program’s purpose and corresponding roles and responsibilities to relevant personnel—have not been updated to reflect the tenets of DOD’s most recent prevention strategy. DOD’s 2014-16 Sexual Assault Prevention Strategy specifies that one of the department’s objectives is to achieve unity of effort and purpose across all of DOD in the execution of sexual-assault prevention and also directs the DOD components and the Secretaries of the military departments to align their implementing plans and policies with the department’s prevention strategy.
While the services’ SAPR policies generally address the prevention of sexual assault, they have not been updated to align with and operationalize the principles outlined in DOD’s most recent prevention strategy. Specifically, the Army and Air Force have revised their policies after the issuance of DOD’s 2014–16 prevention strategy, but neither incorporates specific elements of DOD’s prevention strategy. The Navy and Marine Corps SAPR policies were issued in 2013, prior to the issuance of DOD’s 2014–16 prevention strategy, and have yet to be updated. We recognize that DOD’s most recent prevention strategy was published approximately a year and a half ago and that it takes time for its prevention strategy to take root in an organization as large as DOD. However, there may be a disconnect between DOD’s SAPR policies and what is being implemented by the services—something that has previously been identified within the department as a challenge. Notably, in May 2012, the Joint Chiefs of Staff issued its Strategic Direction to the Joint Force on Sexual Assault Prevention and Response in which it noted that evidence clearly indicated that gaps remain between the precepts of the DOD’s SAPR program and its full implementation at command and unit levels. Thus, without SAPR policies that are aligned with the department’s prevention strategy, the military services will be limited in their ability to promote consistency in the prevention efforts that are being developed and implemented throughout DOD. During site visits to a joint base and three service-specific installations in the same geographic location, we found limited collaboration among the services on their efforts to prevent sexual assault. DOD’s 2014-16 Sexual Assault Prevention Strategy directs the military services to collaborate so they can capture and share best practices and lessons learned related to the prevention of sexual assault.
This direction is further reinforced in both the 2013 and 2015 versions of its SAPR strategic plan, which note that it is the department’s objective to deliver consistent and effective prevention methods and programs. Further, the May 2012 Strategic Direction from the Joint Chiefs of Staff, which predates DOD’s current prevention strategy, directed commanders and leaders across the military services to synchronize their respective sexual-assault prevention and response programs to increase unity of effort through a joint perspective and consistent application of prevention, intervention, and response. During our site visits, we met with military officials and SAPR program personnel who consistently acknowledged the need to improve cross-service collaboration on the prevention of sexual assault. However, they added that the different structures and processes of their respective services’ SAPR programs complicated such collaboration. For example, during a visit to an Army base, program personnel informed us of an attempt to collaborate with the other services on SAPR activities. However, they added that the other services declined to collaborate because the other services, whose programs were solely focused on addressing sexual assault, thought it would be confusing to collaborate with the Army since their program now addressed both sexual harassment and assault. During a visit to another installation, a military official stated that the extent of cross-service collaboration on SAPR is based on the individuals involved and the level of importance that they place on pursuing joint activities. The official added that he was not aware of any overarching headquarters-level guidance that promoted such collaboration when the cross-service relationship and desire to work together did not exist.
In addition to the structural differences of each service, program personnel said that they do not have the number of personnel needed to cultivate more cross-service SAPR activities. Specifically, program personnel said that their SAPR offices were consistently understaffed and that the staff who are available are focused on the needs of their respective service’s program. For example, we met with SAPR personnel at one installation who said that there is one SARC and one victim advocate assigned to serve a population of 1,200 servicemembers. Additionally, SAPR personnel from another installation said that it can be difficult to maintain a sufficient number of SAPR personnel because many of their staff and volunteers are servicemembers who become unavailable when they deploy and, in some cases, are not replaced. During the course of our review, we met with headquarters-level officials in SAPRO who explained that, during joint-base negotiations, the services decided that SAPR programs would remain separate. Specifically, SAPRO officials said that while the Army has made an effort to develop joint response centers, all of the military services wanted procedures, such as sexual-assault investigations, to be handled by their respective service. SAPRO officials stated, however, that one of the assessment tasks in its strategic plan is to conduct a review of joint environments and that they have added questions on joint basing for the military services to respond to and include as part of their input to DOD’s Annual Report on Sexual Assault in the Military. DOD has identified performance measures to assess the extent to which its prevention efforts are achieving its goal to eliminate sexual assault in the military, but these measures are missing many of the 10 key attributes that our prior work has shown can contribute to assessing program performance effectively.
Specifically, DOD has identified 12 performance measures that it will use to assess the overall effectiveness of its sexual-assault prevention and response program, and 5 of these measures are specifically designed to gauge the effectiveness of its prevention line of effort. While all 5 of DOD’s prevention-focused measures demonstrate some of the key attributes, collectively they are missing more than half of these attributes. DOD has recently identified a new set of performance measures to assess its efforts to prevent sexual assault. Since 2005, DOD’s SAPR policy has required that the Under Secretary of Defense for Personnel and Readiness develop metrics to measure compliance and effectiveness of training, awareness, prevention, and response policies and programs. Since that time, we have recommended, among other things, that DOD develop an oversight framework that contained performance goals, strategies to be used to accomplish goals, and criteria to measure the progress of its prevention and response efforts. In October 2009, Congress required DOD to submit a revised SAPR implementation plan to include, among other things, methods to measure the effectiveness of plans that implement DOD policies regarding sexual assaults involving members of the armed forces. In response, in April 2010, DOD conceptualized several measures and further directed in its 2013 SAPR strategic plan that they be developed. In its Annual Report on Sexual Assault in the Military Services for Fiscal Year 2013, DOD identified six performance measures, referred to as SAPR Metrics 1.0, that had been developed to measure the effectiveness of its SAPR program. However, none of the six performance measures were developed specifically to assess DOD’s progress toward preventing sexual assault.
More recently, the President directed the Secretary of Defense to develop a comprehensive report on major improvements to DOD’s SAPR program since August 2013 and to identify clear benchmarks and metrics that will enable the department to measure the effectiveness of its SAPR efforts. In response to this direction, DOD collaborated with the White House and identified 12 performance measures that the department plans to use to assess the effectiveness of its SAPR program and that were included in DOD’s November 2014 report to the President. Of the 12 performance measures, 5 were designed to specifically measure the effectiveness of DOD’s prevention line of effort. Table 3 further describes DOD’s 5 prevention-focused performance measures. We analyzed DOD’s five prevention-focused measures and found that they are missing many of the 10 key attributes that contribute to assessing program performance effectively. Our prior work has shown that agencies successful in measuring performance used measures that demonstrated results, were limited to the vital few, covered multiple priorities, and provided information that was useful for decision making. To determine whether DOD’s prevention-focused performance measures satisfy these four general characteristics, we assessed the measures using 10 specific attributes. Our work cited these specific attributes as key to successful performance measures. Table 4 shows the 10 attributes, their definitions, and the potentially adverse consequences of not having the attributes. Our analysis determined that all five of DOD’s prevention-focused performance measures demonstrate some of the key attributes, but collectively they are missing more than half of the key attributes of successful performance measures that we identified in our prior work. 
Specifically, DOD’s performance measures have linkage in that they are aligned with the prevention line of effort set forth in DOD’s 2014–16 prevention strategy, include baseline and trend data, and exhibit little to no overlap with other measures. We also found, however, that DOD’s prevention-focused performance measures’ usefulness to the department may be limited because each is missing between 5 and 7 of the 10 key attributes that we identified as necessary to successfully measure program performance. Table 5 shows the results of our evaluation of DOD’s prevention-focused performance measures using the 10 key attributes of successful performance measures. As shown in table 5, all of DOD’s prevention-focused performance measures are missing the attribute of measurable targets. Leading practices in federal agency performance management state that, where appropriate, performance measures should have quantifiable, numerical targets and that agencies could use baselines to set realistic but challenging targets. As noted previously, DOD has established baseline and trend data for each of its prevention-focused performance measures, but none of these measures have measurable targets because it has not used these data to set numerical goals nor has it provided the information needed to appropriately interpret the results of the measures and determine program achievements. For example, for its “Prevalence versus Reporting” measure, while DOD has expressed that it aims to close the gap by decreasing the prevalence of sexual-assault incidents and increasing the number of victims willing to report a sexual assault, it does not identify—in either case—a numerical target for the department to work towards.
DOD officials told us that the department has not established numerical targets for its prevention-focused performance measures, because it instead uses indicators such as positive results from surveys or a decrease in the sexual-assault prevalence rate when compared to previous years as measures of success. Further, officials stated they have not established numerical targets because there is not enough research to determine what an appropriate target should be for its measures related to prevalence. We recognize the challenges associated with measuring the progress of activities with complex outcomes and limited examples to replicate, such as DOD’s efforts to prevent sexual assault. In these instances, our prior work on effective agency strategic reviews has shown that setting measurable targets is an evolutionary process involving trial and error and that agencies may need to break their strategic objectives into pieces that can more easily be measured or assessed. Further, for activities with long-term, scientific discovery–oriented outcomes, agencies can also rely on underlying multiyear performance goals, annual performance indicators, and milestones to better plan for and understand near-term progress towards those objectives. Without a numerical target, DOD and other decision makers may be unable to gauge the extent of progress from the department’s prevention efforts because there are no goals that can be used to compare projected performance with actual results. Furthermore, without targets against which it can measure its progress, DOD may not be able to ensure that it is allocating resources to its most effective activities—a key determination given the increasingly limited fiscal resources across the federal government. Our analysis also determined that all five of DOD’s prevention-focused performance measures are missing the attribute of clarity because the corresponding methodology is not clearly defined.
For example, DOD did not specify that it would assess program performance by gender or rank; however, we found that three of DOD’s five prevention-focused performance measures assessed performance both by gender and by rank, while another was focused solely on results broken out by servicemember gender. According to a senior official, DOD chooses to measure performance by gender and rank because data show that women and junior enlisted servicemembers in general are at higher risk of sexual assault. However, DOD did not use gender or rank when calculating its “Prevalence versus Reporting” measure. In our March 2015 report focusing on male-servicemember sexual-assault victims, we reported that developing clear goals and associated metrics related to male victims and articulating them throughout the department would provide DOD with additional information to assess its progress and determine whether any adjustments are needed in its approach for addressing sexual assault in the military. Similarly, DOD would benefit from developing clear goals and associated performance measures by gender and rank so it can effectively assess its progress in preventing sexual assaults from occurring in the military. Further, DOD’s “Prevalence versus Reporting” measure is not clearly defined. Specifically, the measure’s name and definition suggest that the department intends to compare the estimated prevalence of unwanted sexual contact with the number of sexual assaults reported by servicemembers while serving in the military by fiscal year. However, DOD’s annual data on sexual-assault incidents reported to the military services by fiscal year include assaults that occurred prior to the fiscal year in which they were reported. As a result, DOD is comparing the prevalence of unwanted sexual contact that occurred in the past year to reported sexual assaults, regardless of when they occurred.
Given the lack of available data on the fiscal year in which the sexual-assault incidents occurred, we are unable to accurately compare the prevalence of unwanted sexual contact to reported sexual assault by fiscal year. However, our analysis of DOD’s annual reports since fiscal year 2008 shows an increase in the percentage of unrestricted reports of sexual assault made for an incident that occurred prior to the fiscal year it was reported, with about 12 percent reporting a prior-year incident in fiscal year 2008 compared to at least 24 percent reporting a prior-year incident in fiscal year 2013. The lack of clarity of what actually is being measured may lead decision makers to believe that performance was better or worse than it actually was. Our analysis also determined that objectivity and reliability may be limited for three of the five prevention-focused performance measures, because they are based on the results of a convenience sample of servicemembers who respond to the Defense Equal Opportunity Management Institute (DEOMI) Organizational Climate Survey request and voluntarily complete the command-climate survey. As such, the aggregated results are not generalizable to the larger servicemember population. According to a senior DOD official, DOD is still determining the usefulness of using the command-climate survey at the department level by exploring ways to make the results more representative and meaningful at the department level. Performance measures lacking objectivity and reliability may affect the conclusions about the extent to which progress has been made. Additionally, our analysis determined that DOD’s overall suite of prevention-focused performance measures does not identify core program activities that relate to its prevention efforts, does not address government-wide priorities such as cost, quality, and timeliness, and, as a result, is not balanced among priorities. 
For example, DOD identifies SAPR education and training as a key component of its prevention program, but it is unclear how DOD will determine their effectiveness given that none of DOD’s measures are designed to gauge the effectiveness of such activities. Senior DOD officials acknowledged that DOD has more work to do on refining its sexual-assault prevention metrics. However, until DOD has fully developed its prevention-focused performance measures, DOD and other decision makers may be unable to effectively gauge the progress of the department’s prevention efforts. Since our first report in 2008 on sexual assault in the military, DOD has made progress in improving its efforts to prevent and respond to sexual assault across the department. For example, to further develop its strategy to prevent sexual assault, DOD consulted with CDC and incorporated CDC’s framework and prevention-related concepts into its prevention strategy. This included, among other things, defining the different levels at which sexual-assault prevention efforts occur and describing the importance of identifying and understanding the domains in which sexual violence takes place. However, DOD’s 2014-16 prevention strategy does not provide a linkage between its prevention-focused activities and their desired outcomes, and it does not identify risk factors for two of its domains. Addressing these gaps could help DOD ensure that it is taking an evidence-based approach to the prevention of sexual assault. Further, the prevention strategy has not been systematically communicated or disseminated to the installation-based program personnel responsible for its implementation, and the services’ SAPR policies—another means for communicating direction to program personnel—have not been aligned with the strategy to reinforce its purpose.
Finally, while DOD has identified performance measures, the measures are not, in all cases, in line with the key attributes of successful performance measures, which makes it difficult for the department to reliably determine which activities are helping to prevent sexual assault. Without fully developing its strategy to prevent sexual assault by linking prevention activities to desired outcomes, identifying risk factors for all domains, and including fully developed performance measures, leadership at all levels of DOD may face challenges in determining the best prevention efforts to implement in order to prevent sexual assault. Further, without communicating, disseminating, and aligning the department’s overarching strategy to prevent sexual assault with the installation level of the military services, DOD could encounter difficulties in carrying out its vision to eliminate sexual assault in the military. Lastly, at the three service-specific and one joint installation we visited, we found challenges related to collaboration in implementing sexual-assault prevention activities across the services. While this may not be indicative of all service-specific or joint installations, it may require DOD’s attention in the future. To improve the effectiveness of DOD’s strategy for preventing sexual assault in the military, we recommend that, as part of the department’s next biennial update to the 2014–16 sexual-assault prevention strategy, the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in conjunction with the Secretaries of the military departments, to take the following five actions: link sexual-assault prevention activities with desired outcomes, and identify risk and protective factors for all of its domains, including the military community and its leaders.
To help ensure widespread adoption and implementation of DOD’s sexual-assault prevention strategy and to fulfill its role as a framework that can assist leaders and planners in the development of appropriate tasks, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in conjunction with the Secretaries of the military departments, to communicate and disseminate DOD’s prevention strategy and its purpose to the appropriate levels of program personnel as well as their roles and responsibilities for its implementation, and ensure the military services’ SAPR policies are aligned with the department’s prevention strategy. To help improve DOD’s ability to measure the effectiveness of the department’s efforts in preventing sexual assault in the military, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in collaboration with the Secretaries of the military departments, to fully develop the department’s performance measures for the prevention of sexual assault so that the measures include all key attributes of successful performance measures. In written comments on a draft of this report, DOD concurred with each of our five recommendations. DOD’s comments are summarized below and reprinted in appendix IV. DOD also provided technical comments on the draft report, which we incorporated as appropriate. DOD concurred with our first and second recommendations that seek to improve the effectiveness of the department’s strategy for preventing sexual assault by linking sexual assault prevention activities with desired outcomes and identifying risk and protective factors for all of its domains, including the military community and its leaders. 
In its comments, DOD agreed that linking prevention activities to desired outcomes is important and stated that it had developed a methodology to identify the full range of risk and protective factors for the five domains in which it will focus its efforts to prevent sexual assault. DOD also stated that in future phases of the study, it will attempt to link future prevention activities with changes in indicators for risk and protective factors as well as the occurrence of sexual assault. We are encouraged by the efforts DOD has underway to more comprehensively identify risk and protective factors and believe that these efforts will better position the department to focus on eliminating factors that promote sexual assault and to support the factors that may prevent it. Regarding the implementation of DOD’s prevention strategy and its use as a framework to develop appropriate tasks, DOD concurred with our third and fourth recommendations that it communicate and disseminate the prevention strategy and its purpose to the appropriate levels of program personnel as well as their roles and responsibilities for its implementation, and ensure the military services’ SAPR policies are aligned with the department’s prevention strategy. In its comments, DOD identified several efforts that it had initiated related to the communication and implementation of its sexual assault prevention strategy. For example, DOD described a prevention roundtable that includes representatives from the military services, the National Guard Bureau, and the Coast Guard and meets quarterly to collaborate on sexual assault prevention requirements and to share their efforts to prevent sexual assault. DOD also highlighted that, for more than 2 years, it has hosted quarterly webinars on topics that can assist with the implementation of its prevention strategy. 
We are encouraged by the variety of forums that DOD sponsors to facilitate information-sharing on prevention initiatives and, in particular, its recent institution of workshops that help participants to operationalize the prevention strategy. However, as we noted in our report, it is important for DOD to develop a plan for communicating the prevention strategy and its purpose to all personnel to help ensure that it achieves its goal of department-wide unity of effort in the prevention of sexual assault. With regard to the department’s ability to measure the effectiveness of its efforts to prevent sexual assault, DOD concurred with our fifth recommendation that it fully develop its performance measures for the prevention of sexual assault so that they include all key attributes of successful performance measures. In its comments, DOD stated that CDC and leading researchers have recognized sexual assault is a non-standard public health issue that requires different metrics to determine the causal linkages that exist between preventive indicators and prevention-related outcomes. DOD also stated that it would continue to monitor best practices in civilian prevention initiatives, and translate them to military populations as appropriate. As noted in our report, we recognize the challenges associated with measuring the progress of activities with complex outcomes, such as DOD’s efforts to prevent sexual assault. In these instances, our prior work on effective agency strategic reviews has shown that the development of performance measures is an evolutionary process that involves trial and error, particularly for activities with long-term, scientific discovery-oriented outcomes such as the prevention of sexual assault. We also recognize the difficulties that are posed to emergent initiatives—such as the prevention of sexual assault in the military—by the limited number of examples from civilian initiatives that may exist to be replicated.
However, given the substantive differences between military and civilian culture, we encourage DOD to pioneer measures that will most effectively depict the department’s performance. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Secretaries of the Army, the Navy, and the Air Force, and the Commandant of the Marine Corps. In addition, this report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

In a December 2013 letter, the President of the United States directed the Secretary of Defense to provide a comprehensive report—by the following December—on the Department of Defense’s (DOD) progress in addressing the issue of sexual assault. Specifically, the report was to address major programmatic improvements made by DOD since August 2013, including those related to the prevention of sexual assault. Accordingly, in November 2014, DOD submitted its report to the President in which it noted that the department had increased its focus on prevention and had demonstrated progress in preventing sexual assault, and that it planned to intensify its prevention-focused efforts in the coming years. Table 6 provides further details about the prevention-focused efforts highlighted in DOD’s November 2014 report.

Appendix II: Timeline of Selected GAO Reports and DOD and Congressional Actions on Sexual Assault Prevention and Response in the Military

We also made one recommendation related to the Coast Guard’s efforts to prevent and respond to incidents of sexual assault.
To determine the extent to which the Department of Defense (DOD) developed an effective strategy to prevent sexual assault in the military, we obtained and reviewed DOD’s 2014-16 Sexual Assault Prevention Strategy. We also obtained and reviewed DOD’s 2008 Sexual Assault Prevention Strategy, its 2012 Strategic Direction to the Joint Force on Sexual Assault Prevention and Response, its 2013 Sexual Assault Prevention and Response Strategic Plan, and relevant provisions in DOD and military service policies and guidance pertaining to the prevention of sexual assault incidents. We interviewed officials in the Under Secretary of Defense for Personnel and Readiness’s Sexual Assault Prevention and Response Office (SAPRO) as well as SAPR program officials with the Army, the Navy, the Marine Corps, and the Air Force to obtain an understanding of their respective roles in developing DOD’s prevention strategy and the extent to which current military-service policies and guidance are consistent with the department’s goals and objectives for preventing sexual assault. We also interviewed officials from the Centers for Disease Control and Prevention (CDC) about their work developing and evaluating sexual-violence prevention programs and we reviewed and used CDC’s social-ecological model and public-health model to evaluate the extent to which DOD identified elements such as domains, risk factors, and protective factors. Further, we used CDC’s program planning and development model to assess the extent to which DOD’s 2014–16 prevention strategy and related documents contain all of the elements of a framework for effective sexual-violence prevention programs identified by CDC. We also used guidance issued by the Office of Management and Budget on budget preparation, submission, and execution, which, among other things, includes information regarding agency strategic reviews and mitigating risks related to achieving strategic objectives and performance goals.
We also used our prior work on effective agency strategic reviews, which has shown that it is important to review progress toward strategic objectives in that it can help to determine subsequent actions and that leaders and responsible managers should be held accountable for knowing the progress being made in achieving outcomes. We compared DOD’s strategy with prior related GAO reports to determine the extent to which the strategy addressed any of our previous recommendations on the prevention of sexual assault in the military. We discussed the results of our analyses with officials in DOD SAPRO and officials in each of the military services responsible for developing and implementing DOD’s strategy to prevent sexual assault in the military. To determine the extent to which DOD implemented activities department-wide and at service-specific and joint installations related to the department’s efforts to prevent sexual assault in the military, we reviewed DOD’s 2008 and 2014-16 Sexual Assault Prevention Strategies, its 2012 Strategic Direction to the Joint Force on Sexual Assault Prevention and Response, its 2013 Sexual Assault Prevention and Response Strategic Plan, and relevant provisions in DOD and military service policies and guidance and the National Defense Authorization Acts for fiscal years 2004–2015 to identify any required prevention activities. We also used CDC’s evaluation model for public health programs to assess whether DOD’s 2014-16 Sexual Assault Prevention Strategy included the six key elements of effective public health strategies identified by CDC. To determine the extent to which the military services have implemented activities to prevent sexual assault, we visited three service-specific installations—Fort Shafter (Army), Schofield Barracks (Army), and Marine Corps Base Hawaii—and one joint base—Joint Base Pearl Harbor-Hickam (Navy and Air Force)—on Oahu, Hawaii.
We chose these locations based on their high numbers of unrestricted reports of sexual assault relative to other installations within the same branch of military service and on their close proximity to each other. During these visits, we met with military officials and program personnel who were identified as having a role in preventing sexual assault, including commanders, sexual-assault response coordinators, sexual harassment/assault response and prevention program managers, victim advocates, chaplains, criminal investigators, legal personnel, and medical and mental health-care providers to discuss their familiarity with DOD’s prevention strategy and whether it had a role in the prevention activities sponsored at their respective installations. We also discussed the extent of cross-service collaboration on sexual-assault prevention activities at these installations and compared these collaboration efforts with guidance from the Joint Chiefs of Staff and leading practices on interagency collaboration to determine whether the military services have taken the steps necessary to effectively collaborate on similar prevention efforts. To determine the extent to which DOD has developed performance measures to assess the effectiveness of its efforts to prevent sexual assault in the military, we reviewed DOD’s 2013 Sexual Assault Prevention and Response Strategic Plan, its 2014 Report to the President of the United States on Sexual Assault Prevention and Response, its annual report on sexual assault in the military for fiscal year 2014, its 2014-16 Sexual Assault Prevention Strategy, and other related documents to identify performance measures that the department uses or plans to use to assess its progress in preventing sexual assault in the military. Additionally, we met with DOD and military service officials to verify the performance measures identified and to discuss how they will be used to assess the effectiveness of the department’s prevention efforts.
We also compared DOD’s five prevention-focused performance measures with GAO criteria on key attributes of successful performance measures. We conducted this performance audit from August 2014 to October 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, key contributors to this report were Kimberly Mayo, Assistant Director; Tracy Barnes; Elizabeth Curda; Marcia Fernandez; Mae Jones; Ron La Due Lake; Amie Lesser; Brian Pegram; Garrett Riba; and Stephanie Santoso.

Military Personnel: Actions Needed to Address Sexual Assaults of Male Servicemembers. GAO-15-284. Washington, D.C.: March 19, 2015.
Military Personnel: DOD Needs to Take Further Actions to Prevent Sexual Assault during Initial Military Training. GAO-14-806. Washington, D.C.: September 9, 2014.
Military Personnel: DOD Has Taken Steps to Meet the Health Needs of Deployed Servicewomen, but Actions Are Needed to Enhance Care for Sexual Assault Victims. GAO-13-182. Washington, D.C.: January 29, 2013.
Military Personnel: Prior GAO Work on DOD’s Actions to Prevent and Respond to Sexual Assault in the Military. GAO-12-571R. Washington, D.C.: March 30, 2012.
Preventing Sexual Harassment: DOD Needs Greater Leadership Commitment and an Oversight Framework. GAO-11-809. Washington, D.C.: September 21, 2011.
Military Justice: Oversight and Better Collaboration Needed for Sexual Assault Investigations and Adjudications. GAO-11-579. Washington, D.C.: June 22, 2011.
Military Personnel: DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs Need to Be Further Strengthened. GAO-10-405T. Washington, D.C.: February 24, 2010.
Military Personnel: Additional Actions Are Needed to Strengthen DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs. GAO-10-215. Washington, D.C.: February 3, 2010.
Military Personnel: Actions Needed to Strengthen Implementation and Oversight of DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs. GAO-08-1146T. Washington, D.C.: September 10, 2008.
Military Personnel: DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs Face Implementation and Oversight Challenges. GAO-08-924. Washington, D.C.: August 29, 2008.
Military Personnel: Preliminary Observations on DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs. GAO-08-1013T. Washington, D.C.: July 31, 2008.
Military Personnel: The DOD and Coast Guard Academies Have Taken Steps to Address Incidents of Sexual Harassment and Assault, but Greater Federal Oversight Is Needed. GAO-08-296. Washington, D.C.: January 17, 2008.

Sexual assault is a crime that devastates victims and has a far-reaching negative impact for DOD because it undermines DOD's core values, degrades mission readiness, and raises financial costs. DOD data show that reported sexual assaults involving servicemembers more than doubled from about 2,800 reports in fiscal year 2007 to about 6,100 reports in fiscal year 2014. Based on results of a 2014 survey, RAND estimated that 20,300 active-duty servicemembers were sexually assaulted in the prior year. Senate Report 113-176 includes a provision for GAO to review DOD's efforts to prevent sexual assault. This report addresses the extent to which DOD (1) developed an effective prevention strategy, (2) implemented activities department-wide and at military installations related to the department's effort to prevent sexual assault, and (3) developed performance measures to determine the effectiveness of its efforts to prevent sexual assault in the military.
GAO evaluated DOD's strategy against CDC's framework for effective sexual-violence prevention strategies, reviewed DOD policies, and interviewed cognizant officials. The Department of Defense (DOD) developed its strategy to prevent sexual assault using the Centers for Disease Control and Prevention (CDC) framework for effective sexual-violence prevention strategies, but DOD does not link activities to desired outcomes or fully identify risk and protective factors. Specifically, DOD's strategy identifies 18 prevention-related activities, but they are not linked to desired outcomes—a step that CDC says is necessary to determine whether efforts are producing the intended effect. CDC has also demonstrated that by identifying risk and protective factors—relative to the domain or environment in which they exist—organizations can focus efforts on eliminating risk factors that promote sexual violence while also supporting the protective factors that prevent it. DOD identifies five domains in its strategy and includes risk factors for three—individuals, relationships, and society—but it does not specify risk factors for the other two domains—leaders at all levels of DOD and the military community. Further, DOD does not specify how the protective factors, such as emotional health, identified in its strategy relate to the five domains. Thus, DOD may be limited in its ability to take an evidence-based approach to the prevention of sexual assault. DOD and the military services are in the process of implementing prevention-focused activities, but they have not taken steps to ensure that installation-level activities are consistent with the overarching objectives of DOD's strategy. DOD's strategy identifies 18 activities, 2 of which DOD considers implemented while efforts to address the remaining 16 are ongoing. For example, DOD officials report that they have implemented the activity directing the development of a military community of practice. 
Additionally, GAO identified activities that had been developed and implemented at the four installations GAO visited, but found that they may not be consistent with DOD's strategy because it has not been communicated or disseminated to the personnel responsible for implementing the activities. Further, service policies—key conduits of such communication—do not provide the guidance necessary to unify the department's prevention efforts because they have not been updated to align with and operationalize the principles outlined in DOD's most recent strategy. Thus, DOD cannot be sure that all prevention-related activities are achieving the goals and objectives of the department's strategy. DOD has identified five performance measures to assess the effectiveness of its prevention efforts, but these measures are not fully developed, as they are missing many of the 10 key attributes that GAO has found can contribute to assessing program performance effectively, such as baseline and trend data, a measurable target, and clarity. Specifically, all five performance measures demonstrate some of these attributes, but collectively they are missing more than half of them. All five measures include baseline and trend data, but none of the measures has a measurable target, clarity, or certain other attributes. Without fully developed measures, DOD and other decision makers may not be able to effectively gauge the progress of the department's prevention efforts. GAO recommends that DOD link prevention activities with desired outcomes; identify risk and protective factors for all domains; communicate and disseminate its strategy to all program personnel; align service policies with the strategy; and fully develop performance measures. DOD concurred with all recommendations and noted actions it was taking.
Harmful biological agents can be released by way of the air, food, water, or insects. Their release may not be recognized for several days, during which time a communicable disease—such as smallpox—can spread to others who were not initially exposed. Some biological agents—such as anthrax and plague—produce symptoms that can easily be confused with influenza or other, less virulent illnesses, leading to a delay in diagnosis or identification. For example, the recent outbreak of the new infectious disease, SARS, whose onset includes common symptoms such as high fever, coughing, and difficulty in breathing, was not recognized until about 4 months after the first known case. Initial response to a public health emergency, including an act of bioterrorism, is generally a local responsibility that could involve multiple jurisdictions in a region, with states providing additional support when needed. Since clinicians at the local level are most likely to be the first ones to detect an incident, they and local public health officials are expected to report incidents or symptoms of suspicious illness to the state health department and other designated parties. States can provide supporting personnel, financial resources, laboratory capacity, and other assistance to local responders. Because of the many participants involved, the identification and management of bioterrorism and other public health emergencies call for effective communication and collaboration across all levels of government and the private sector. Figure 1 presents the probable series of responses to the release of a biological agent by the various players. Prior to the anthrax incidents in October 2001, a number of threats and hoaxes involving biological agents, and at least one successful bioterrorist act, had occurred domestically. 
Since that time, health care and public health officials at the federal, state, tribal, local, and international levels, as well as the private sector—part of a complex network of people, systems, and organizations—have examined their readiness to respond to acts of bioterrorism and have found weaknesses. Among others, these weaknesses include (1) vulnerable and outdated health information systems and technologies, (2) lack of real-time surveillance and epidemiological systems, (3) ineffective and fragmented communications networks, (4) incomplete domestic preparedness and emergency response capability, and (5) communities without access to essential public health services. These reported deficiencies at local, state, and federal levels may hinder the effective detection and identification of a potentially harmful biological agent. The broad scope of bioterrorism activities brings together different professional communities with very diverse areas of expertise—the public health and medical community, the scientific community, and the intelligence and law enforcement community. The public health and medical community—consisting of public health officials, clinicians, traditional first responders, and veterinary and agricultural communities—is responsible for protecting the health of people, animals, and agricultural products. The scientific community—consisting of human, microbial, animal, plant, and environmental researchers, among others—characterizes, develops detection systems for, and creates vaccines and treatments for diseases caused by biological agents. The intelligence and law enforcement community—consisting of intelligence analysts, law enforcement officers, diplomatic officials, and military officers—monitors and deters terrorist movement and activity. In addition, other professions, such as drug store pharmacists and school administrators, are being identified as new players in bioterrorism preparedness and response.
Public health and private laboratories are another vital part of the surveillance network because only laboratory results can definitively identify pathogens. Every state has at least one public health laboratory to support its disease surveillance activities and other public health programs. State laboratories conduct testing for routine surveillance or as part of special clinical or epidemiological studies. Independent commercial and hospital laboratories may also share with public health agencies information they have gathered through their private surveillance efforts, such as studies of patterns of antibiotic resistance or of the spread of diseases within a hospital. In addition, commercial and hospital laboratories may be required by state law or regulation to report certain findings for public health surveillance. Federal agencies have key responsibilities for bioterrorism preparedness and response. HHS has primary responsibility for coordinating the nation’s response to public health emergencies, including bioterrorism. The HHS divisions responsible for bioterrorism preparedness and response, and their primary responsibilities, include the following: The Office of the Assistant Secretary for Public Health Emergency Preparedness coordinates the department’s work to oversee and protect public health, including cooperative agreements with states and local governments. States and local governments can apply for funding to upgrade public health infrastructure and health care systems to better prepare for and respond to bioterrorism and other public health emergencies. On May 9, 2003, HHS announced that guidelines had been released for the use of $1.4 billion allocated for bioterrorism cooperative agreements. The office also maintains a recently built command center, where it can coordinate the response to public health emergencies from one centralized location. This center is equipped with satellite teleconferencing capacity, broadband Internet hookups, and analysis and tracking software.
CDC has primary responsibility for nationwide disease surveillance for specific biological agents, and it also provides an array of scientific and financial support for state infectious disease surveillance, prevention, and control. For example, CDC administers cooperative agreements for public health preparedness totaling $870 million for fiscal year 2003. CDC has been addressing bioterrorism preparedness and response explicitly since 1998. In April 2003, CDC opened a new emergency operations center to organize and manage all emergency operations at CDC, allowing for immediate communication among CDC, HHS, and DHS, as well as federal intelligence and emergency response officials and state and local public health officials. CDC also provides testing services and consultation that are not available at the state level; training on infectious diseases and laboratory topics, such as testing methods and outbreak investigations; and grants to help states conduct disease surveillance. In addition, CDC provides state and local health departments with a wide range of technical, financial, and staff resources to help maintain or improve their ability to detect and respond to disease threats. CDC laboratories provide highly specialized tests that are not always available in state public health or commercial laboratories, and they assist states with testing during outbreaks. These laboratories help diagnose life-threatening, unusual, or exotic infectious diseases, including those that may be caused by bioterrorist attacks, such as smallpox. CDC also conducts research to develop improved diagnostic methods, and it trains laboratory staff to use them. The Agency for Healthcare Research and Quality (AHRQ) is responsible for supporting research designed to improve the outcomes and quality of health care, reduce its costs, address safety and medical errors, and broaden access to effective services, including anti-bioterrorism research.
AHRQ has initiated several major projects and activities designed to assess and enhance the linkages between the clinical care delivery system and the public health infrastructure. AHRQ-supported research focuses on emergency preparedness of hospitals and health care systems for bioterrorism and other public health events; technologies and methods to improve the linkages between the personal health care system, emergency response networks, and public health agencies; and training and information needed to prepare clinicians to recognize the symptoms of bioterrorist agents and manage patients appropriately. The Food and Drug Administration (FDA) is responsible for safeguarding the food supply, ensuring that new vaccines and drugs are safe and effective, and conducting research on diagnostic tools and treatment of disease outbreaks. It is increasing its food safety responsibilities by improving its laboratory preparedness and food monitoring inspections in accordance with the Public Health Security and Bioterrorism Preparedness and Response Act of 2002. The National Institutes of Health (NIH) is responsible for conducting medical research in its own laboratories and for supporting the research of nonfederal scientists in universities, medical schools, hospitals, and research institutions throughout the United States and abroad. Its National Institute of Allergy and Infectious Diseases has a program to support research related to organisms that are likely to be used as biological weapons. NIH is planning to implement a strategic plan for research on CDC’s category A, B, and C biological agents. A complete list of these agents is included in appendix II. The Health Resources and Services Administration (HRSA) is responsible for improving the nation’s health by ensuring equal access to comprehensive, culturally competent, quality health care.
Its Bioterrorism Hospital Preparedness program administers cooperative agreements, totaling $498 million, to state and local governments to support hospitals’ efforts toward bioterrorism preparedness and response. Besides HHS, other federal departments and agencies are involved in bioterrorism preparedness and response efforts, including the following: DOD, while primarily responsible for the health and protection of its service members on the battlefield, conducts research on bioterrorism preparedness and response through agencies such as the Defense Advanced Research Projects Agency. This research supports force protection and is shared with other agencies when it may benefit the civilian population. It also has civil support responsibilities through the Joint Task Force for Civil Support, the National Guard, and the Army. DOE’s national laboratories are developing new capabilities for countering chemical and biological threats, including biological detection, modeling, and prediction. EPA is responsible for protecting the nation’s water supply from terrorist attack. In January 2003, it established a new homeland security research center. The center is assessing threat management for the water supply and environmental detectors for potential use in protecting the water supply. USDA has become involved in bioterrorism preparedness and response because of the increasing realization that the food supply may become a vehicle for a biological attack. Biological attacks on the health of animals and plants are important because animals and plants can spread diseases and toxins that may be harmful to humans. VA manages one of the nation’s largest health care systems and is the nation’s largest drug purchaser. The department purchases pharmaceuticals and medical supplies for the Strategic National Stockpile and the National Medical Response Team stockpile. 
The Department of Veterans Affairs Emergency Preparedness Act of 2002 recently directed VA to establish at least four medical emergency preparedness centers to (1) carry out research and develop methods of detection, diagnosis, prevention, and treatment for biological and other public health and safety threats; (2) provide education, training, and advice to health care professionals inside and outside VA; and (3) provide laboratory and other assistance to local health care authorities in the event of a national emergency. At least one of VA’s new centers is to focus on biological threats. On June 12, 2002, Congress passed the Public Health Security and Bioterrorism Preparedness and Response Act of 2002. The legislation requires specific activities related to bioterrorism preparedness and response. For example, it calls for steps to improve the nation’s preparedness for bioterrorism and other public health emergencies by increasing coordination and planning for such events; developing priority countermeasures, such as the Strategic National Stockpile; and improving state, local, and hospital preparedness for and response to bioterrorism and other public health emergencies. It also requires HHS and USDA to enhance controls on dangerous biological agents and toxins to protect the safety of food, drugs, and drinking water. On November 25, 2002, Congress enacted legislation creating the new Department of Homeland Security (DHS). Consolidating the functions of 22 federal agencies, DHS’s primary missions include (1) preventing terrorist attacks in the United States, (2) reducing America’s vulnerability to terrorism, and (3) minimizing the damage from potential attacks and natural disasters. DHS was established on January 24, 2003; most of the agencies were transferred effective March 1, 2003. According to DHS, the Secretary has until January 2004 to bring all 22 agencies into the new organization. 
The new department is responsible for assisting all levels of government in meeting their responsibilities in domestic emergencies and other challenges—especially in dealing with incidents that are chemical or biological in nature—through planning, mitigation, preparedness, response, and recovery activities. DHS is to develop and deploy countermeasures to current and emerging terrorist threats. In conjunction with HHS, it is to coordinate the nation’s preparedness and response to bioterrorism. Two of DHS’s five divisions are to address preparedness and response to bioterrorism. The Emergency Preparedness and Response Division’s mission includes assisting all levels of government, and others, in responding to domestic emergencies; the Science and Technology program’s mission includes developing and deploying countermeasures to current and emerging terrorist threats, including bioterrorism. For fiscal year 2004, the President’s budget requested $365 million to develop and implement integrated systems to reduce the probability and consequences of a biological attack on the nation’s civilian population and agricultural system. DHS has inherited programs from other departments that have a bioterrorism role, such as USDA’s Agricultural Research Service and Animal and Plant Health Inspection Service. We have designated the implementation and transformation of DHS as high risk and have added it to our 2003 high risk list. This designation is based on three factors. First, the implementation and transformation of DHS is an enormous undertaking that will take time to achieve in an effective and efficient manner. Second, DHS’s prospective components already face a wide array of existing management and operational challenges. Finally, failure to effectively carry out DHS’s mission would expose the nation to potentially very serious consequences. IT can play an essential role in supporting federal, state, local, and tribal governments in bioterrorism readiness efforts. 
Development of IT builds upon the existing system capabilities of local and state public health agencies, not only to provide routine public health functions but also to support public health emergencies, including bioterrorism. For public health emergencies in particular, the ability to quickly exchange data from provider to public health agency—or from provider to provider—is crucial in detecting and responding to naturally occurring or intentional disease outbreaks. Such exchange allows physicians to share individually identifiable information with public health agencies for use in performing public health activities. In March 2001, CDC’s Public Health’s Infrastructure: A Status Report acknowledged several IT limitations in the public health infrastructure. For example, basic capability for disease surveillance systems to detect and analyze disease outbreaks is lacking for several reasons. First, health care providers have traditionally used paper- or telephone-based systems to report disease outbreaks to approximately 3,000 public health agencies. This is a labor-intensive, burdensome process for local health care providers and public health officials, often resulting in incomplete and untimely data. Second, not all public health agencies have access to the Internet or to secure channels for electronically transmitting sensitive data. Several categories of IT can play vital roles during the course of an event. These categories are described in a technology assessment for AHRQ that was completed by the University of California San Francisco-Stanford Evidence-based Practice Center. These categories of IT serve different but related functions and include the following: Detection—systems that consist of devices for the collection and identification of potential biological agents from environmental samples, which make use of IT to record and send data to a network.
Surveillance—systems that facilitate the performance of ongoing collection, analysis, and interpretation of disease-related data to plan, implement, and evaluate public health actions. Diagnostic and clinical management—systems with potential utility for enhancing the likelihood that clinicians will consider the possibility of bioterrorism-related illness. These systems are generally designed to assist clinicians in developing a differential diagnosis for a patient who has an unusual clinical presentation. Communications—systems that facilitate the secure and timely delivery of information to the relevant responders and decision makers so that appropriate action can be taken. Supporting technologies—tools or systems that provide information for the other categories of systems (e.g., detection and surveillance). Recognizing the importance of IT to strengthening the public health infrastructure, RAND’s Science and Technology Policy Institute held a series of workshops between November 2001 and April 2002. The workshops brought together a diverse set of stakeholders to begin the process of developing an IT infrastructure that could support bioterrorism preparedness efforts across the country. During these workshops, consensus was reached on the need for an overarching IT infrastructure to prepare for and respond to bioterrorism and other public health emergencies. RAND described the different phases of a bioterrorism event and the intensity of need for IT during each phase, and it proposed that a bioterrorism event could consist of the following phases: Prevention and preparedness—includes reducing the possibility of a biological event by methods such as developing vaccines, conducting desktop exercises, and heightening alert status.
Event recognition—includes monitoring and detecting the release of a biological agent or identifying the first case of an illness, by methods such as using detection devices and surveillance systems and diagnosing the first case of smallpox. Early and sustained response—includes initiating the response to the initial event and then continuing the measures required to address the longer-term impact of the exposure, such as deploying resources to contain a biological agent, identifying the source, replenishing medical supplies, ensuring surge capacity for the treatment of victims, and monitoring exposed individuals. Recovery—includes recovering after the biological threat is under control, by measures such as providing mental health support, restocking vaccine and drug reserves, and identifying lessons learned to improve future responses. According to RAND, during the course of a bioterrorism event, IT should be capable of addressing all phases of the event. Because of the dynamic and unpredictable nature of public health emergencies, various types of IT are needed during the course of an event. These systems and the intensity of their need for IT may vary from event to event, depending on the circumstances. In addition, IT components that are required for one phase may also be critical for other phases, but the intensity of need for them may vary. These needs include consideration of the phase being supported, required capabilities for each phase, and the data required at various points in time. Figure 2 illustrates the probable intensity of need for each category of IT across the different phases. The six key federal agencies involved in bioterrorism preparedness and response have a large number of existing and planned bioterrorism-related information systems. Specifically, these agencies identified 72 information systems and supporting technologies, as well as 12 other IT initiatives. 
Of the 72 information systems, 34 are surveillance systems, 18 are supporting technologies, 10 are communications systems, and 10 are detection systems. Additionally, in planning or operating each of these systems and IT initiatives, the extent of coordination or interaction performed by the lead agency with other related government agencies covered a wide range of activity. Coordination varied by system and IT initiative, ranging from absence of coordination, to awareness without coordination, to formal coordination, to joint development of initiatives. For example, about 30 percent of the information systems and IT initiatives are being either formally coordinated or jointly developed with another agency. The six federal agencies with key roles in bioterrorism preparedness and response identified 72 existing or planned information systems and supporting technologies, as well as 12 other IT initiatives. About 74 percent of these systems and IT initiatives are currently operational. The estimated costs reported for these systems exceed $63 million for fiscal year 2003. Of the 72 information systems identified, 34 are surveillance systems, 18 are supporting technologies, 10 are communications systems, and 10 are detection systems. Of the 12 IT initiatives, HHS identified 4, DOD and DOE identified 3 each, and USDA identified 2. Table 1 summarizes the number of systems by agency and IT category. Agencies identified a variety of information systems and IT initiatives, such as the following: HHS’s 28 systems are largely in operation and are used for surveillance of diseases and illnesses, as well as for communications. As the lead federal agency for protecting the health and safety of the public, CDC is responsible for most of the systems included in the HHS inventory. 
For example, CDC is currently implementing the Health Alert Network (HAN), an early warning and response system that is intended to provide federal, state, and local health agencies with better communications during public health emergencies; additional details are provided in appendix III. DOD, while primarily responsible for the health of its service members on the battlefield, conducts research on bioterrorism preparedness and response for force protection and shares that research with other agencies when it may benefit the civilian population. Because of the broad nature of DOD’s responsibilities, it identified 14 systems in all categories. One example of a DOD system is the Electronic Surveillance System for the Early Notification of Community-based Epidemics (ESSENCE), which supports early identification of infectious disease outbreaks in the military by comparing analyses of data collected daily with historical trends; additional details are provided in appendix III. DOE—specifically its national laboratories—has identified 14 research and development efforts for technologies to support detection systems, among others. An example is the Biological Aerosol Sentry and Information System (BASIS), a portable system of networked air- sampling units that are capable of detecting airborne biological incidents at large gatherings such as political conventions and major indoor and outdoor sporting events; additional details are provided in appendix III. USDA’s Food Safety and Inspection Service is using IT to support methods of inspection to better protect the public from foodborne illness. EPA has five systems defined as supporting technologies—two that could potentially support surveillance activities on the safety of drinking water and three modeling and simulation tools that are used to simulate the dispersions of contaminants in water and indoor air. VA has one information system that was developed for surveillance within its health care facilities. 
In planning or operating each of these information systems and IT initiatives, the extent of coordination or interaction between the lead agency and other related government agencies covered a wide range. Such coordination ranged from a lack of contact with other agencies, to awareness, to formal coordination, to joint development of initiatives. According to CDC officials, while collaboration has improved, there are still organizational difficulties related to combining resources from multiple sources to meet common goals. It is typical for staff or contractual resources funded through one mechanism to be kept separate from those funded through another mechanism. Agencies reported that about 30 percent of systems and initiatives are being either formally coordinated or jointly developed with another agency. Of the six agencies in our review, CDC and DOE's national laboratories accounted for the majority of the formally coordinated or jointly developed information systems and IT initiatives. One example of a jointly developed information system is FDA's eLEXNET system. It is a secure Web-based database for sharing laboratory data on food safety among FDA, USDA, DOD, state agriculture, and state and local health laboratories. FDA also shares data with other HHS operating divisions, as well as with Customs (now part of DHS) and the Federal Bureau of Investigation (FBI). This joint effort, which is currently in the planning stage, could improve these agencies' abilities to address foodborne illnesses. In addition, CDC has several IT initiatives in coordination with state and local public health agencies. To support the compatibility, interoperability, and security of federal agencies' many planned and operational IT systems, the identification and implementation of data, communications, and security standards for health care delivery and public health are essential.
Although federal efforts are now under way to strengthen and increase the use of these standards, the identification and implementation of these standards remain incomplete. Several implementation challenges remain, including coordination of the various efforts to ensure consensus on standards, and establishment of milestones. Until these challenges are addressed, federal agencies cannot ensure their systems’ abilities to exchange data with other systems when needed. A major consequence of not implementing such standards is the promulgation of piecemeal systems, which results in disparate systems that cannot exchange data. An underlying challenge for establishing and implementing standards is that no overall strategy guides IT development and initiatives. IT standards, including data standards, enable the interoperability and portability of systems within and across organizations. As we have reported in the past, many different standards are required to develop interoperable health information systems, which reflect the complex nature of health care delivery in the United States. Vocabulary standards, which provide common definitions and codes for medical terms and determine how information will be documented for diagnoses and procedures, are one type of data standard. Vocabulary standards are intended to lead to consistent descriptions of a patient’s medical condition by all practitioners. The use of common terminology helps in the clinical care delivery process, enables consistent data analysis from organization to organization, and facilitates transmission of information. Without such standards, the terms used to describe the same diagnoses and procedures sometimes vary. For example, the condition known as hepatitis may also be described as a liver inflammation. The use of different terms to indicate the same condition or treatment complicates retrieval and reduces the reliability and consistency of data. 
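A small sketch can illustrate how a vocabulary standard resolves this kind of terminological variation. The concept code and synonym table below are invented for illustration; a real system would use a standard vocabulary such as SNOMED CT or ICD codes.

```python
# Sketch of vocabulary standardization: differently worded reports of the
# same condition resolve to one canonical concept code, so data can be
# aggregated consistently. The code "C0001" is hypothetical, not a real code.
SYNONYMS = {
    "hepatitis": "C0001",
    "liver inflammation": "C0001",
    "inflammation of the liver": "C0001",
}

def to_concept_code(term):
    """Normalize a free-text diagnosis to its canonical concept code."""
    return SYNONYMS.get(term.strip().lower())

# Two practitioners describing the same condition produce the same code,
# so counts grouped by code are comparable across reporters.
assert to_concept_code("Hepatitis") == to_concept_code("Liver Inflammation")
```

Once every reporter maps local terms through such a table, retrieval and analysis no longer depend on which phrasing a particular clinician happened to use.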
In addition to vocabulary standards, messaging standards are also important because they provide for the uniform and predictable electronic exchange of data by establishing the order and sequence of data during transmission. Medical messaging standards dictate the segments in a specific medical transmission. For example, they might require the first segment to include the patient's name, hospital number, and birth date. A series of subsequent segments might transmit the results of a complete blood count, one result (e.g., iron content) per segment. Messaging standards can be adopted to enable intelligible communication between organizations via the Internet or some other communications pathway. Without these standards, the interoperability of federal agencies' systems may be limited, restricting the exchange of data available for information sharing. In addition to vocabulary and messaging standards, there is also the need for a high degree of security and confidentiality to protect medical information from unauthorized disclosure. More detail on these and other key standards is provided in appendix XI. The need for health care data standards has been recognized for a number of years, and progress has been made in defining these standards. Yet, despite these efforts, the identification and implementation of these standards remain incomplete. CDC acknowledged the need for standards specific to public health systems, and in 1995 it established the National Electronic Disease Surveillance System (NEDSS) initiative to address the limitations of current surveillance systems. These limitations included (1) the multiplicity of program-specific information systems, (2) incomplete and untimely data, (3) the unacceptable burden on health care system respondents, (4) the overwhelming volume of data to be managed by state and local health departments, and (5) the lack of state-of-the-art IT.
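The segment-and-field structure that messaging standards such as HL7 prescribe can be sketched with a simplified HL7 v2-style message. The message content and field layout below are invented for illustration and omit most of the fields a real HL7 message requires.

```python
# Simplified HL7 v2-style message: segments separated by carriage returns,
# fields within a segment separated by '|'. Content is invented.
raw = (
    "PID|12345|DOE^JANE|19700101\r"   # patient: hospital number, name, birth date
    "OBX|1|FE|Iron|55|ug/dL\r"        # one observation result per OBX segment
    "OBX|2|WBC|White cells|6.1|10*3/uL\r"
)

def parse(message):
    """Split a message into {segment_type: [list of field lists]}."""
    parsed = {}
    for segment in filter(None, message.split("\r")):
        fields = segment.split("|")
        parsed.setdefault(fields[0], []).append(fields[1:])
    return parsed

msg = parse(raw)
assert msg["PID"][0][1] == "DOE^JANE"   # field position is fixed by the standard
assert len(msg["OBX"]) == 2             # two results, one per segment
```

Because the standard fixes the order and meaning of each field, a receiving system can locate the patient name or a laboratory result by position alone, without sender-specific parsing logic.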
As part of the NEDSS initiative, CDC, in collaboration with others, agreed to encourage the use of data, communications, and security standards that are required for building interoperable public health systems. CDC expects that the implementation of NEDSS will improve the reporting of disease outbreaks from the states by increasing the timeliness, accuracy, and completeness of data. According to CDC, once fully implemented, these standards are to provide the ability to merge data from laboratories with epidemiological data, in addition to providing the ability to obtain information on cross-jurisdictional outbreaks. In August 1996, Congress also recognized the need for standards to improve the Medicare and Medicaid programs in particular and the efficiency and effectiveness of the health care system in general. It passed the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which calls for the industry to control the distribution and exchange of health care data and begin to adopt electronic data exchange standards to uniformly and securely exchange patient information. According to the National Committee on Vital and Health Statistics (NCVHS), significant progress has occurred on several HIPAA standards; however, the full economic benefits of administrative simplification will be realized only when all of the standards are in place. In July 2000, the NCVHS again reported on the need for standards, this time highlighting the need for uniform standards for patient medical record information. It found that the major impediments to electronic exchange of patient medical information were the limited interoperability of health information systems; the limited comparability of data exchanged among providers; and the need for better data quality, accountability, and integrity. In November 2001, NCVHS issued another report outlining a strategy, which includes developing and using standards.
According to NCVHS, the public health infrastructure could be strengthened through more rapid identification and implementation of existing standards and other new standards. The Institute of Medicine (IOM) and others are also reporting on the lack of national standards for the coding and classification of clinical and other health care data, and for the secure transmission and sharing of such data. Complementary to the work of NEDSS on identifying standards for public health systems, in 2001 the Office of Management and Budget created the Consolidated Health Informatics (CHI) initiative as one of its e-government projects to facilitate the adoption of data standards, among others, for health care systems within the federal government. The CHI initiative is an interagency work group led by HHS and composed of representatives from DOD, VA, and other agencies. Recognizing the need for standards to be incorporated across federal health care systems, HHS, DOD, and VA recently announced their first set of standards (e.g., HL7, LOINC) for the electronic exchange of health information to be implemented across the federal government. Once federal agencies adopt the recommended standards, they are expected to include the standards in their architectures and to build systems accordingly. This commitment is to apply to all new systems acquisition and development projects. The CHI initiative plans to announce additional standards for federal systems as the working group agrees upon them, but does not have time frames established for making these announcements. Despite progress in defining health care IT standards, several implementation challenges—such as coordination of the various initiatives to achieve consensus on the use of standards, establishment of milestones, and development of implementation mechanisms—remain to be worked out. Currently, there are no activities or mechanisms defined to ensure coordination and consensus between these initiatives at the national level.
HHS officials agree that leadership and direction are still needed to coordinate the various standards-setting initiatives and to ensure consistent implementation of standards for health care delivery and public health. Coordination of these initiatives is essential to ensure that the completion of standards development is accelerated and that consensus is obtained from all stakeholders. According to NCVHS, the process of developing health care data standards involves many diverse entities, such as individual and group practices, software developers, domain-specific professional associations, and allied health services. This fragmentation has slowed the dissemination and adoption of standards by making it difficult to convene all of the relevant stakeholders and subject matter experts in standards development meetings and to reach consensus within a reasonable period of time. Another challenge is that not all of the federal government's standards-setting initiatives have milestones associated with efforts to define and implement standards. For example, while the CHI initiative—the primary federal initiative to establish standards—has announced initial standards and implementation requirements for health care information exchange, it has not yet established milestones for future announcements. Accordingly, it is not clear when these announcements will occur. Another challenge is that there is no mechanism to monitor the implementation of standards throughout the health care industry. In November 2001, NCVHS reported a need for a mechanism, such as compliance testing, to ensure that health care standards are uniformly adopted as part of a national strategy. NCVHS added that without an implementation mechanism and leadership at the national level, problems associated with systems' incompatibility and lack of interoperability will persist throughout the different levels of government and the private sector and, consequently, throughout the health care sector.
Since that time, however, no national monitoring mechanism has yet been established. A major consequence of not implementing such standards is the promulgation of piecemeal systems, which results in disparate systems that cannot exchange data. This leads to information gaps, hindering the prompt and accurate identification of emerging biological threats; consequently, timely detection of major public health threats is limited. For example, according to CDC officials, one of the IT challenges encountered by public health officials responding to the anthrax events of October 2001 was the issue of exchanging data among the many participants involved in the response—clinical sites, local health departments, emergency responders, state health departments, public health laboratories, and federal agencies. During this event, participants accumulated dissimilar data and principally exchanged it manually. An underlying challenge for establishing and implementing such standards is that no overall strategy guides IT development and initiatives. With no overall strategy that addresses the development and implementation of standards and associated milestones, federal agencies cannot ensure their systems' abilities to exchange data with other systems when needed and cannot ensure effective preparation for and response to bioterrorism and other public health emergencies. Within the public health sector, the implementation of emerging information technologies could help to strengthen agencies' technological capabilities to support the nation's ability to prepare for and respond to bioterrorism and other public health emergencies. Agencies identified several activities to research, develop, and implement emerging technologies, which were generally initiated to meet agencies' specific needs. However, barriers exist that may hinder the public health community from benefiting from the implementation of emerging information technologies.
An emerging technology is one in which research has progressed far enough to indicate a high probability of technical success for new products and applications that might have substantial markets within approximately 10 years. Agencies identified several IT applications that incorporate the use of emerging technologies. They include commercial IT and communications solutions, along with IT that was developed specifically for the health care sector. Examples of emerging information technologies for use in public health applications include the following:
Geographic information system (GIS): GIS is being used by federal agencies to support disease and outbreak surveillance. CDC uses GIS to track the spread of infection through a community, to identify geographic areas of particular health concern, and to identify susceptible populations. The resulting information can be used in support of surveillance systems to help identify spatial clustering of abnormal events as the data are collected. GIS was used in 2001 to map data related to CDC's emergency response to the anthrax bioterrorism event, and it was used in 2002 to aid the FBI's investigation of the anthrax attack in Florida. FDA is currently using GIS technology in its food safety system, eLEXNET.
Web-based images for diagnosis: Several of CDC's systems use the Internet to enhance reporting and communications capabilities. For example, its DPDx system uses the Internet to strengthen the capabilities of laboratories to diagnose parasitic diseases. The system also enables users to obtain diagnostic assistance over the Internet by allowing laboratories to transmit images to CDC and obtain answers to inquiries, sometimes within minutes. The system increases the interaction between CDC and public health laboratories.
Data mining: DOD's ESSENCE system uses data mining technology to support early detection of infectious disease outbreaks or bioterrorism events.
This system enhances public health officials' decision-making capabilities regarding events that may be public health emergencies.
Grid computing: DOD's Army Medical Research Institute of Infectious Diseases is sponsoring a project with the support of several partner organizations to use grid-computing techniques to help find a treatment for smallpox after infection. The system will run simulated tests of molecules representing some 35 million potential drugs to see how they interact with the smallpox virus.
Computer-aided DNA signature development: DOE's Lawrence Livermore National Laboratory is developing software called KPATH, which is a computer-aided DNA signature development tool. It analyzes pathogen DNA to identify unique signatures. Once identified, these signatures can be used to assist in the process of detecting biological incidents. The results of such development efforts support an enhanced capacity for rapid identification of biological agents.
Virtual private network (VPN): DOE's Los Alamos National Laboratory is working on an Internet-based system called the Forensics Internet Research Exchange, which supports the sharing of biothreat information among research and government agencies. This system is secured through the use of a VPN. A VPN is a communication system that uses public networks to securely transport private intraorganizational and interorganizational information. While industry use of VPNs is common, only four of the systems included in our inventory use VPNs for public health-specific applications.
Public key infrastructure (PKI): CDC has begun using PKI for secure communications between public health officials using NEDSS. PKI is a system of hardware, software, policies, and people that, when fully implemented, can provide a suite of information security assurances that are important in protecting sensitive communications and transactions.
Portable biological detection unit: DOE's Sandia National Laboratory has made progress toward developing a small sampling and analysis instrument that is portable and does not require a chemist's expertise to operate. This system, µChemLab, is the first that reduces the size of large instruments to the extent that they can be taken into the field and used by first responders, such as firefighters. The device utilizes embedded software algorithms that indicate the level of threat present in the environment in which the instrument is deployed.
While the public health community may benefit by implementing emerging information technologies, several factors introduce barriers and risks to their successful implementation. One barrier is that emerging technologies likely have not been in use long enough for the developers to identify all areas for standardization, or for the technologies to have evolved to the point that they are interoperable with other already-existing technologies within public health. Another barrier, according to Gartner, Inc., a leading private research firm, is that the use of emerging information technologies may likely change an organization's existing business model. Therefore, their implementation may introduce a significant level of risk. For these reasons, the introduction of an emerging information technology may be disruptive to existing business processes. A third possible barrier is the lack of a clearly defined mechanism for continuing research and development for emerging technologies once the results are turned over to the public health sector. For example, according to a CDC official, there is no mechanism to develop demonstration projects to identify and prove the usefulness and applicability of emerging technologies within the public health sector at the federal, state, and local levels.
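The kind of comparison described earlier for ESSENCE, in which daily counts are checked against historical trends, can be illustrated with a simple baseline test. The threshold rule and the data below are invented for illustration and are not DOD's actual detection algorithm.

```python
# Illustrative syndromic-surveillance check: flag a day whose count exceeds
# the historical mean by more than z standard deviations. This is a toy
# rule, not the actual ESSENCE algorithm.
from statistics import mean, stdev

def is_aberration(history, today, z=2.0):
    """Return True if today's count is unusually high relative to history."""
    mu, sigma = mean(history), stdev(history)
    return today > mu + z * sigma

# Daily counts of respiratory complaints at one clinic (invented data).
baseline = [12, 9, 11, 10, 13, 8, 11, 12, 10, 9]
assert not is_aberration(baseline, 13)   # within normal day-to-day variation
assert is_aberration(baseline, 25)       # spike worth investigating
```

A production system would also account for day-of-week effects, reporting delays, and multiple data sources, but the core idea of comparing current counts with a historical baseline is the same.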
At the time of our review, funds for two research and development efforts that were initially identified as promising were discontinued without consideration of the projects' value to the public health infrastructure. Lastly, we observed that activities related to the use of emerging technologies are often the result of independent efforts for specific purposes. Consequently, agencies may not be able to share successes or lessons learned. Effectively addressing each of these barriers will be essential if the health care industry is to take full advantage of emerging information technologies. As concerns about the possibility of bioterrorism have been elevated, federal, state, and local public health agencies have been increasing efforts to prepare for and respond to public health emergencies. Federal agencies identified over 70 existing information systems, supporting technologies, and IT initiatives that may better support the public health infrastructure. The extent of coordination or interaction between the lead agency and other related government agencies ranged from a lack of coordination, to awareness, to formal coordination, to jointly developed initiatives. As these and future systems are pursued, leadership will be essential to set priorities for information systems, supporting technologies, and other IT initiatives to enhance the effective preparation for and response to bioterrorism and other public health emergencies. Although a number of efforts are under way, no comprehensive set of standards has been implemented sufficiently to fully support the public health infrastructure.
Leadership and an overall IT strategy are important for ensuring that standards development organizations and federal agencies address remaining implementation challenges: (1) coordination of the various efforts and consensus on the use of standards, (2) establishment of milestones for defining and implementing standards, and (3) mechanisms for monitoring implementation of standards. Without a strategy to ensure coordinated efforts and consistent application of standards, federal agencies cannot ensure that their systems are compatible or interoperable and, therefore, cannot effectively support actions to manage public health emergencies through the timely and accurate exchange of information. Finally, federal agencies have begun to implement emerging technologies to strengthen the public health infrastructure. While some emerging technologies have been implemented, and others are being researched and developed, agencies cannot take full advantage of these technologies because several barriers exist. Effectively addressing each of these barriers will be essential if the health care industry is to fully leverage these emerging information technologies. Leadership will be essential to address these barriers and also to establish mechanisms for identifying and prioritizing uses of emerging technologies to better support the nation’s ability to prepare for and respond to public health emergencies. We recommend that the Secretary of Health and Human Services, in coordination with other key stakeholders—such as the Secretaries of Defense, Homeland Security, and Veterans Affairs—establish a national IT strategy for public health preparedness and response. This IT strategy should identify steps toward improving the nation’s ability to use IT in support of the public health infrastructure. 
More specifically, it should
identify all federal agencies' IT initiatives, using the results of our inventory as a starting point;
set priorities for information systems, supporting technologies, and other IT initiatives;
define activities for ensuring that the various standards-setting organizations coordinate their efforts and reach further consensus on the definition and use of standards;
establish milestones for defining and implementing all standards;
create a mechanism—consistent with HIPAA requirements—to monitor the implementation of standards throughout the health care industry; and
address existing barriers and establish mechanisms for identifying and prioritizing uses of emerging technologies that are appropriate for ensuring continued improvements to the nation's ability to prepare for and respond to public health emergencies.
We received written comments on a draft of this report from the Deputy Assistant Secretary of Defense for Chemical/Biological Defense at DOD, the Acting Associate Administrator for Management and Administration at DOE, the Acting Principal Deputy Inspector General at HHS, and the Secretary of Veterans Affairs. These four agencies generally concurred with our results, but they did not comment specifically on the recommendations. They provided technical comments, which we have incorporated in this report as appropriate. USDA and EPA concurred with our results in oral comments, which were primarily technical and were incorporated as appropriate. Technical comments were generally limited to additional information or correction of information on the description of their systems included in the appendixes. While DHS was not included as one of the agencies in our review because it did not exist until the end of this engagement, we provided DHS officials with the opportunity to comment on the draft of this report, which they declined. Written comments from DOD, DOE, HHS, and VA are reproduced in appendixes XII to XV.
In its comments, HHS stated that the focus of this report on IT overemphasized its role and did not address other components of the public health infrastructure. As we describe in the background section of the report, IT is a tool that enables personnel to fulfill their mission. We recognize that the United States health care and public health infrastructure is a complex network of people, systems, and organizations, with participation at all levels—federal, state, tribal, local, international, and the private sector. We also recognize that there are other important issues about the public health infrastructure that merit attention, such as workforce capacity and training, capacity of the public health laboratories, variation in state public health laws, capacity of the health care delivery systems, and communication strategies for addressing the public. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date on the report. At that time, we will send copies of the report to other congressional committees. We will also send copies of this report to the Secretaries of Agriculture, Defense, Energy, Health and Human Services, Homeland Security, and Veterans Affairs, and to the Administrator of the Environmental Protection Agency. Copies will also be made available at no charge on our Web site at www.gao.gov. If you have any questions on matters discussed in this report, please contact me at (202) 512-9286 or M. Yvonne Sanchez, Assistant Director, at (202) 512-6274. We can also be reached by e-mail at pownerd@gao.gov and sanchezm@gao.gov, respectively. Other contacts and key contributors to this report are listed in appendix XVI.
The objectives of our review were to (1) compile an inventory of current and planned bioterrorism information technology (IT) initiatives at selected federal agencies and identify the range of coordination efforts, (2) identify and describe the development and use of health care IT standards for bioterrorism-related systems, and (3) review the potential use of emerging information technologies for bioterrorism preparedness and response. To address these objectives, we conducted our audit work at six selected federal agencies—United States Department of Agriculture (USDA), Department of Defense (DOD), Department of Energy (DOE), Department of Health and Human Services (HHS), Department of Veterans Affairs (VA), and the Environmental Protection Agency (EPA)—that we previously reported were involved in supporting public health and bioterrorism preparedness and response, including through the use of IT. We excluded federal agencies that are responsible only for law enforcement and consequence management related to other types of terrorism. To compile the inventory of current and planned IT initiatives related to bioterrorism, we met with agency officials and identified the categories of systems (e.g., detection, surveillance, diagnostic and clinical management, communications, and supporting technologies) to be included in the inventory and the data to be collected about each system. The inventory includes information systems with applications related to both public health and bioterrorism, since most systems were developed for routine public health purposes but are potentially useful during a bioterrorism event. We also created a database for collecting and analyzing the data from the selected agencies. Next we collected and compiled the inventory data and validated the consistency of the data with each agency. We also included systems that were not necessarily designed for public health purposes, but might be adapted for that function.
We included other technologies, such as detection devices, that incorporate an IT component that facilitates the collection of data for surveillance systems or that otherwise use IT to perform diagnosis, management, prevention, surveillance, reporting, and communication functions. Our inventory includes information systems that support detection, surveillance, diagnostic and clinical management, communications, and supporting technologies. The inventory specifically excludes the following types of IT: law enforcement and intelligence systems; military systems with no applicability to civilian populations; distance learning and other training systems; disease-specific surveillance systems with no potential to support bioterrorism preparedness and response; systems designed to track agricultural terrorism; and consequence management systems for traditional first responders (e.g., police and firefighters). We met with and obtained documentation from representatives of several nonprofit, research, and public health professional organizations, such as the RAND Corporation, the University of California at San Francisco- Stanford Evidence-based Practice Center, and the National Association of County and City Health Officials. Based on our research and the information provided by those parties, we identified categories of IT that support public health and bioterrorism preparedness and response. To illustrate the role of different categories of IT, we also collected more detailed information about selected systems efforts. During our discussions with agency officials about the results of their inventory data, we asked about an agency’s interaction and involvement with information systems and IT initiatives being led by other federal agencies. We also collected data as part of the systems inventory about jointly developed projects that included a partner outside their agency.
To identify and describe the development, use, and progress of health care data, communications, and security standards, we identified ongoing federal efforts and public/private collaborations to implement standards for IT systems that could be used to support the public health infrastructure. In addition, we met with HHS officials to discuss ongoing activities and progress being made to implement the National Committee on Vital and Health Statistics’ recommendations on the National Health Information Infrastructure and other standards-related initiatives. We also met with other experts from the Centers for Disease Control and Prevention and Stanford University and discussed with them the use and applicability of health care standards within the public health infrastructure. To review the potential use of emerging information technologies for bioterrorism preparedness and response, we used research from the Department of Commerce and private-sector consultants to define the term “emerging technologies” as it pertains to information technology. During discussions with agency officials, we asked about their uses and experiences with emerging information technologies, as well as barriers to their implementation. Then, we reviewed the selected agencies’ use of and plans for applications specific to public health that were included in the systems inventory. According to CDC, the United States public health system and primary health care providers must be prepared to address various biological agents, including pathogens that are rarely seen in the United States. CDC defines three categories of biological diseases or agents based upon the public health impact and the level of risk to the nation’s security that the transmission of these agents may introduce. 
The categories and the associated agents are described below:

Category A Diseases/Agents: High-priority agents include organisms that pose a risk to national security because they can be easily disseminated or transmitted from person to person, result in high mortality rates and have the potential for major public health impact, might cause public panic and social disruption, and require special action for public health preparedness.
Anthrax (Bacillus anthracis)
Botulism (Clostridium botulinum toxin)
Plague (Yersinia pestis)
Smallpox (Variola major)
Tularemia (Francisella tularensis)
Viral hemorrhagic fevers (filoviruses and arenaviruses)

Category B Diseases/Agents: Second highest priority agents include those that are moderately easy to disseminate, result in moderate morbidity rates and low mortality rates, and require specific enhancements of CDC’s diagnostic capacity and enhanced disease surveillance.
Brucellosis (Brucella species)
Epsilon toxin of Clostridium perfringens
Food safety threats (e.g., Salmonella species, Escherichia coli O157:H7, Shigella)
Glanders (Burkholderia mallei)
Melioidosis (Burkholderia pseudomallei)
Psittacosis (Chlamydia psittaci)
Q fever (Coxiella burnetii)
Ricin toxin from Ricinus communis (castor beans)
Typhus fever (Rickettsia prowazekii)
Viral encephalitis (alphaviruses [e.g., Venezuelan equine encephalitis, eastern equine encephalitis, western equine encephalitis])
Water safety threats (e.g., Vibrio cholerae, Cryptosporidium parvum)

Category C Diseases/Agents: Third highest priority agents include emerging pathogens that could be engineered for mass dissemination in the future because of availability, ease of production and dissemination, and potential for high morbidity and mortality rates and major health impact.
In addition to the phases of an event (i.e., prevention and preparedness, event recognition, early and sustained response, and recovery), there are corresponding categories of IT, which play a vital role as the event progresses. These categories of IT serve different but related functions. For the purposes of this report, we categorized systems according to their primary purposes, as defined in a technology assessment for the Agency for Healthcare Research and Quality that was completed by the University of California San Francisco-Stanford Evidence-based Practice Center. While not all detectors include IT components, detection systems collect and identify potential biological agents in environmental samples, regardless of whether anyone has been exposed to a harmful level of a contaminant. Components of a detection system can include collection systems, particulate counters or biomass indicators, rapid identification systems, and integrated collection and identification systems. In general, detection systems have three parts: (1) a sampler or collector to concentrate the aerosol and preserve samples for further analysis, (2) a trigger component (often a particulate counter or a biomass indicator) that can identify the presence of a potentially harmful biological agent, and (3) an identifier to provide specific identification of the biological agent. Biological detection technologies are at a much less mature stage of development than chemical detection technologies. According to a February 2001 report by the North American Technology and Industrial Base Organization (NATIBO), no single sensor detects or identifies all biological agents of interest. Several different technologies may be needed as components of a layered detection network. It is difficult to distinguish specific biological agents from naturally occurring background materials.
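The trigger stage's basic logic can be sketched in a few lines: flag readings that rise well above a rolling background baseline, and require several consecutive high readings before alarming, one common way to suppress one-off false positives. This is an illustrative sketch only; the function names, window sizes, and thresholds are assumptions, not any fielded detector's algorithm.

```python
from collections import deque

def make_trigger(window=60, sigma_mult=3.0, persistence=3):
    """Return a trigger that flags particulate counts well above the
    rolling background baseline. Only repeated high readings alarm,
    which helps suppress one-off false positives.
    (Illustrative sketch; all parameters are assumptions.)"""
    background = deque(maxlen=window)
    consecutive_hits = 0

    def trigger(count):
        nonlocal consecutive_hits
        if len(background) < 10:
            background.append(count)   # still learning the background
            return False
        mean = sum(background) / len(background)
        var = sum((c - mean) ** 2 for c in background) / len(background)
        threshold = mean + sigma_mult * max(var ** 0.5, 1.0)
        if count > threshold:
            consecutive_hits += 1      # keep spikes out of the baseline
        else:
            consecutive_hits = 0
            background.append(count)
        return consecutive_hits >= persistence

    return trigger
```

Feeding the trigger steady ambient counts never alarms; only a sustained spike (here, three consecutive high readings) does.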
Real-time detection and measurement of biological agents in the environment is challenging because of the number of potential agents to be identified, the complex nature of the agents themselves, the countless number of similar micro-organisms that are a constant presence in the environment, and the minute quantities of pathogen that can initiate infection. Most available systems are point detection systems that are either in the field-testing stage or still in the laboratory. The NATIBO assessment also reported that current systems for detecting biological agents are large, complex, expensive, and subject to false results. The 10 detection systems identified in the inventory include IT components. These systems make use of IT to record and send data to a network. Table 2 shows systems included in the inventory that were developed and operated by DOE and DOD for use in both military and civilian settings. One example of a detection system is the Biological Aerosol Sentry and Information System (BASIS). This is a portable system of networked air sampling units that is capable of detecting airborne biological incidents at large gatherings such as political conventions and major indoor and outdoor sporting events. In the mid-1990s, DOE’s national laboratories began work to detect and prevent bioterrorism under the Chemical-Biological National Security Program. As part of that work, Lawrence Livermore and Los Alamos laboratories developed BASIS, which has been used during the Olympics and other events to collect air samples and provide information on the time, duration, amount, and types of biological releases. It uses barcodes to maintain data that link samples to filters taken from specific sampling units. These data are analyzed at field laboratories and tracked with BASIS. If a biological agent is detected, it will provide information about the type of agent as well as where and when it was collected.
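The barcode linkage BASIS uses (tying each filter back to its sampling unit, location, and collection window) can be pictured with a minimal record structure. The class and field names below are illustrative assumptions, not the actual BASIS data model.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class FilterSample:
    """One barcoded filter pulled from a sampling unit.
    (Illustrative fields; not the actual BASIS schema.)"""
    barcode: str
    unit_id: str
    location: str
    collected_start: datetime
    collected_end: datetime
    result: Optional[str] = None   # filled in after laboratory analysis

class SampleTracker:
    """Links barcodes to samples so a positive laboratory result can be
    traced back to where and when the air was collected."""
    def __init__(self):
        self._by_barcode = {}

    def register(self, sample):
        self._by_barcode[sample.barcode] = sample

    def record_result(self, barcode, agent):
        self._by_barcode[barcode].result = agent

    def trace(self, barcode):
        s = self._by_barcode[barcode]
        return s.result, s.location, s.collected_start, s.collected_end
```

Once a field laboratory records a result against a barcode, the trace gives the agent plus where and when the sample was collected.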
BASIS also estimates exposure levels and durations to assist public health officials in identifying the population that requires treatment. It was adapted to process samples from the BioWatch program beginning in February 2003. Surveillance is the ongoing collection, analysis, and interpretation of disease-related data to plan, implement, and evaluate public health actions. Surveillance systems differ from detection systems in that they monitor the actual incidence of disease or illness. Without an adequate surveillance system, officials cannot know the true scope of existing health problems and may not recognize new diseases until many people have been affected. The surveillance network relies on the participation of health care providers, laboratories, state and local health departments, and other nontraditional data sources across the nation. Surveillance systems monitor and track abnormal situations that require epidemiological actions and that direct preventive measures by guiding resource allocation and assessing interventions. The most important aspect of a surveillance system is its ability to detect an outbreak at a stage when intervention may affect the expected course of events. It is the public health officials’ most important tool for detecting and monitoring both existing and emerging infectious diseases. Surveillance activities may be either active or passive. Passive surveillance relies on physicians, laboratory and hospital staff, and others to take the initiative in reporting data to health departments. Passive systems may be inadequate to identify a rapidly spreading outbreak in its earliest and most manageable stage because there is a chronic history of underreporting and a time lag between diagnosis of a condition and the health department’s receipt of a report. Active surveillance relies on public health officials to take the initiative to periodically contact laboratory officials to gather data. 
Active surveillance produces more complete information than passive surveillance but is more costly. Timely and reliable data are essential components of public health assessment, policy development, and assurance at all levels of government; however, the current capacity of public health surveillance is weakened by gaps and fragmentation. Fragmentation has developed in surveillance systems in part because states and localities have not developed uniform procedures for data collection, storage, and transmission. In February 1999, we reported on gaps in the nation’s public health surveillance network for important emerging infectious diseases, and we recommended that CDC, in collaboration with state, local, and other public health officials, reach consensus on the core capabilities needed at each level of government, including IT capabilities. Another key factor shaping the development of surveillance systems is that, historically, investment in these systems has been targeted to specific programs (e.g., tuberculosis, sexually transmitted diseases), resulting in a patchwork of surveillance efforts across the spectrum of infectious disease threats and other programs. Most surveillance systems are identified by the type of data they collect; there are eight categories of surveillance:

1. Foodborne illness surveillance—systems that collect, process, and disseminate information on foodborne pathogens or illness. In September 2001, we reported weaknesses in several of CDC’s surveillance systems for foodborne illness; we reported that these systems had limited usefulness because there were gaps in the data and because CDC did not release the data in a timely manner.

2. Hospital-based surveillance—systems that collect data on hospital-acquired infections for hospital infection control officers. Their primary purpose is to track hospital-acquired infections, not to identify undiagnosed infections from the community.
However, hospital-based surveillance systems could play two roles in the early detection of emerging infections: the identification of a cluster of recently admitted patients, which might suggest a community-based outbreak, and the identification of a cluster of cases within the hospital that may suggest inpatients with an unrecognized communicable disease.

3. Influenza surveillance—systems that collect data on influenza-like illness. These systems are relevant to bioterrorism surveillance because many bioterrorism-related illnesses present with flu-like symptoms. Influenza surveillance could also serve as a model because these systems integrate clinical and laboratory data for the detection of influenza outbreaks and are coordinated global efforts; they fulfill needs similar to those of surveillance for bioterrorism.

4. Laboratory and antimicrobial resistance surveillance—systems that facilitate the collection, analysis, and reporting of notifiable pathogens and of antimicrobial resistance data that could potentially facilitate the rapid detection of a biological agent. Laboratory surveillance systems are an essential component of any system for the detection of a covert bioterrorism event, both for the detection of uncommon organisms (e.g., smallpox, anthrax, and Ebola) and common organisms with unusual patterns of antimicrobial resistance.

5. Network of clinical reports—systems that collect and analyze clinical reports from individual clinicians and sentinel networks. The growth of such networks has generated a demand for information systems capable of automating data collection, analysis, reporting, and communication.

6. Syndromal surveillance—systems that collect data on the earliest, often nonspecific, signs and symptoms caused by most biological agents; patients presenting with these syndromes are therefore the targets of syndromal surveillance programs. These systems are still considered experimental, and there is no widely accepted definition for any of these syndromes.
As a result, syndromal surveillance systems are widely heterogeneous with respect to the syndromes under surveillance and how each syndrome is defined.

7. Zoonotic and animal disease surveillance—systems that collect, process, and disseminate information on zoonotic and animal diseases. There are concerns that a bioterrorist attack could involve the dissemination of a zoonotic illness among animal populations with the intention of infecting humans or livestock and causing economic and political chaos. Early detection of such an event requires effective rapid detection systems for use by farm workers, meat inspectors, and veterinarians, with real-time reporting capabilities to public health officials.

8. Other—systems that collect surveillance data sufficiently different that they do not fit into the described categories. These systems could be valuable additions to surveillance networks that integrate data from clinicians, hospitals, and laboratories.

Our inventory identifies 34 surveillance systems, which monitor and track specific categories of illness and disease. Some of CDC’s surveillance systems have been used for several years and consist only of a database, while others, such as NEDSS, are more comprehensive. As table 3 indicates, 4 systems are in development, 2 are currently being evaluated as pilots, 1 is being planned, and 27 are operational. One example of a surveillance system is DOD’s Electronic Surveillance System for the Early Notification of Community-based Epidemics (ESSENCE). ESSENCE was developed to support early identification of infectious disease outbreaks in the military and to provide epidemiological tools for improved investigation. ESSENCE uses ambulatory data that are collected from its military hospitals and clinics and transmitted daily to a central database. By comparing the daily analyses to historical trends, it can identify patterns that suggest an infectious disease outbreak.
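The comparison of daily counts against historical trends can be sketched as a simple statistical test: flag any day whose syndrome count sits several standard deviations above the mean of a preceding baseline window. This is a minimal illustration of the general idea, not ESSENCE's actual detection algorithm; the window length and threshold are assumptions.

```python
import statistics

def outbreak_alerts(daily_counts, baseline_days=28, z_threshold=3.0):
    """Flag days whose syndrome count is unusually high relative to the
    preceding baseline window. (Illustrative only; not ESSENCE's actual
    algorithm, and the parameters are assumptions.)"""
    alerts = []
    for day in range(baseline_days, len(daily_counts)):
        baseline = daily_counts[day - baseline_days:day]
        mean = statistics.fmean(baseline)
        sd = statistics.pstdev(baseline) or 1.0   # guard a flat baseline
        if (daily_counts[day] - mean) / sd >= z_threshold:
            alerts.append(day)
    return alerts
```

Several weeks of ordinary counts followed by a sharp jump yields a single alert on the day of the jump.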
ESSENCE uses geo-spatial data to cluster syndromic groupings based on the locations of occurrences. By getting daily reports and automatic alerts, epidemiologists can track, in near real-time, the syndromes that are being reported in a given region. It incorporates privacy algorithms and supports agent-based response using artificial intelligence software, reasoning, data mining, and visualization tools. DOD’s use of electronic medical records enhances its ability to quickly collect data for syndromic surveillance. In the future, the department plans to find, analyze, and add new data sources to the system. For the purposes of this report, we defined diagnostic and clinical management systems as systems with potential utility for enhancing the likelihood that clinicians consider the possibility of bioterrorism-related illness and treat patients accordingly. Diagnostic systems are generally designed to assist clinicians in developing a differential diagnosis for a patient who has an unusual clinical presentation and consist of three different types: general diagnostic decision support systems (DSS), radiology interpretation systems, and natural language processing techniques. General diagnostic DSS are those designed to assist clinicians in developing a specific diagnosis for a patient who has unusual signs and symptoms. For these systems to be useful in the event of a covert bioterrorist attack, they should prompt clinicians to consider the possibility of bioterrorism-related illness as a potential cause of the symptoms, thereby increasing the probability that the clinician will perform appropriate diagnostic testing.
In addition, since many biothreat agents can cause pulmonary disease, x-rays or other radiological tests would be a common diagnostic procedure performed on patients who might benefit from either the use of radiology interpretation systems that can increase the diagnostic accuracy of radiology reports, or the use of natural language processing techniques to automate the identification of disease concepts in the free text found in diagnostic reports. Clinical management systems can also make recommendations to clinicians by abstracting clinical information from electronic medical records, applying a set of rules, and generating patient-specific management and prevention recommendations. In general, these systems are limited to institutions with electronic medical records and robust medical informatics programs. There are no known systems specifically designed to provide recommendations to clinicians or public health officials for management of a bioterrorism event. The systems that do exist provide recommendations at the point of care, typically when the clinician opens the electronic medical record of the patient in question. These diagnostic and clinical management systems are similar in that they both use clinical information about a patient, apply information from a knowledge base, and generate a list of possible diagnoses or a list of management recommendations. Based on this similarity, we have included them in the same category of IT. Of the federal agencies included in our review that utilize other diagnostic and clinical management systems for their health care delivery operations—DOD, VA, and HHS’s Indian Health Service—none has implemented these particular applications as defined above. The purpose of communications and reporting systems is to facilitate the secure and timely delivery of information in the midst of a public health emergency to the relevant responders and decision makers, so that appropriate action can be undertaken.
During a public health emergency, clinicians must be able to communicate rapidly with their patients; public health officials must be able to communicate with other local, state, and federal officials; and laboratories must be able to communicate diagnostic test results. Robust security measures that ensure patient confidentiality and resist cyber attacks are also a necessary component of any health-related communication system. Our systems inventory contains 10 communications systems. While communications within the public health community still depend largely on telephone- and paper-based systems, they are moving to Web-based and electronic data transmission. CDC is responsible for many of the communications systems under development in HHS; however, some of the systems are not yet fully implemented at the state or local levels, and this could negatively affect communication of health information to the public. As table 4 shows, all 10 of these systems are operational. The Health Alert Network (HAN) is one example of a nationwide communications system that is currently being developed by CDC. HAN is to serve as a platform for (1) distribution of health alerts, (2) dissemination of prevention guidelines and other information, (3) distance learning, (4) national disease surveillance, (5) electronic laboratory reporting, and (6) communication of bioterrorism-related initiatives to strengthen preparedness at the local and state levels. HAN is intended to strengthen the capacity of state and local health departments by serving as an early warning and response system for bioterrorism and other health events. HAN provides the capacity to send urgent health alerts to local agencies via broadcast technologies, such as fax services and autodialing. HHS has awarded grants to all 50 states, 3 large cities, 3 counties, 8 territories, and the District of Columbia for HAN implementation.
When completed, HAN is to provide high-speed, secure Internet connections for local health officials; on-line, Internet- and satellite-based distance learning systems; and early warning broadcast alert systems. HAN currently provides secure Internet access to two-thirds of the nation’s counties, and at least 13 states have high-speed Internet access to all of their counties. State and local governments may also use CDC funding to expand HAN to community partners such as health organizations and major hospital networks. In addition to enhancing state and local communications, at the time of our review, CDC had provided grants to three local centers for public health preparedness. The centers are considered models of integrated communications and information systems across multiple sectors, advanced operational readiness assessment, and comprehensive training and evaluation. New York’s Monroe County Center uses its own health alert network to link hospitals, insurers, and county health care agencies to doctors, pharmacies, and clinics for emergency and routine communications. Monroe County also developed a unified platform for the community to view and track the status of their emergency departments and the number of available beds for a specialty unit within a hospital. In addition to working on syndromic surveillance, Colorado’s Denver County Center has developed a bi-directional alert communication and notification system for its public health partners and has explored the use of redundant response system tools for rapidly notifying key local public health partners in the event that traditional phone service is lost. Supporting technologies are tools or systems that provide information for the other categories of systems (e.g., detection, surveillance, etc.). 
During our discussions with federal officials, we found that many projects still in applied research and development are intended to support a particular component associated with a type of system, such as detection devices. These projects offer promising techniques that are not currently in use. For example, DOE’s national laboratories conduct research into new detection and surveillance techniques that, when developed, may be fully deployed into the public health infrastructure. DOE’s Los Alamos National Laboratory (LANL) is conducting the Enabling Analytical and Modeling Tools for Enhanced Disease Surveillance research project. Its objective is to develop analytical tools to support public health officials in quickly identifying emerging threats so they can respond accordingly. Subsets of this research are incorporated into ongoing projects. The Forensics Internet Research Exchange is another LANL research project that is intended to connect a network of laboratories and government agencies through a secure virtual private network (VPN) so that they can share genetic sequencing data for identifying strains of biological organisms. In addition, the Defense Advanced Research Projects Agency’s Bio-ALIRT program is a research project to further enable early detection of biological events from artificial or natural causes. Its objective is to scientifically determine which nontraditional data sources (e.g., human behavior) are useful in enabling early detection of potential biological attacks. More detailed descriptions of these projects are included in appendixes IV through X. Simulation and computational modeling is another important—and still developing—technology for supporting bioterrorism preparedness and response. With the increase of computational power available in today’s technology, and the increasing availability of data, we may soon be able to predict the course of emerging infectious diseases. 
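The compartmental models such prediction efforts build on can be illustrated with the classic SIR (susceptible-infected-recovered) formulation, stepped in discrete time. This is a textbook sketch under assumed parameters, not a reconstruction of the LANL or DOD tools.

```python
def sir_curve(population, infected0, beta, gamma, days):
    """Discrete-time SIR epidemic model: beta is the transmission rate,
    gamma the recovery rate (values here are assumptions). Returns the
    (susceptible, infected, recovered) counts for each day."""
    s, i, r = float(population - infected0), float(infected0), 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history
```

With beta greater than gamma, the infected count rises to a peak and then declines as the susceptible pool is depleted, while the total population stays constant.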
LANL is piloting the Bioreactor Simulation Tools project, which models and analyzes biological systems in order to create models for predicting the spread of a biological agent. The DOD Chemical and Biological Defense program’s Joint Effects Model incorporates simulation tools (used to create a hazard prediction model) that are expected to predict environmental effects. Another DOD project, the Joint Operational Effects Federation, is leveraging existing simulation capabilities to support the prediction of chemical and biological effects at various levels of operation. DOD’s simulation tools were developed for military purposes. Our inventory includes 18 systems that are identified as supporting technologies. Of these, 12 are operational, 3 are in development, and 3 are being evaluated as pilots. While they are not included within the scope of our systems inventory, there are other systems that will facilitate health care delivery during an act of bioterrorism or other public health emergency. These systems—such as electronic medical records—were excluded from the scope of this review because they are neither public health systems nor were they primarily developed for biodefense. Both DOD and VA have electronic medical information systems (i.e., the Composite Health Care System and the Veterans Health Information Systems and Technology Architecture), which enhance their ability to automate the collection of surveillance data for systems such as ESSENCE. Automated medical information systems can play an important role for clinicians during their response to a medical emergency, in documenting the treatment of illness and its outcome, and in collecting and sharing diagnostic test results. Electronic medical records can play a role during routine surveillance by serving as important data sources for public health surveillance.
The use of electronic medical records could reduce the burdensome and costly use of paper-based processes, facilitating rapid access to data critical for near real-time public health surveillance. USDA became involved in activities concerning bioterrorism because of the increasing realization that the food supply may become a vehicle for a biological attack against the civilian population. Biological attacks on the health of animals and plants are also important to recognize because there are a number of diseases and toxins harmful to humans that can be spread by animals and plants. USDA’s Homeland Security staff within the Office of the Secretary is responsible for coordinating activities on terrorism across USDA. In addition, three of USDA’s services have been involved in bioterrorism research and preparedness: the Agricultural Research Service (ARS), the Animal and Plant Health Inspection Service (APHIS), and the Food Safety and Inspection Service (FSIS). ARS has conducted research to improve onsite rapid detection of biological agents in animals, plants, and food and has improved its detection capacity for diseases and toxins that could affect animals and humans. APHIS has a role in responding to biological agents that are zoonotic (i.e., capable of affecting both animals and humans). APHIS has veterinary epidemiologists to trace the source of animal exposures to diseases. FSIS provides emergency preparedness for foodborne incidents, including bioterrorism. USDA identified 10 information systems and supporting technologies.

Global Expeditionary Medical System (GEMS)
Type of system: Surveillance
GEMS provides an integrated biohazard surveillance system that is capable of maintaining a global watch over Air Force personnel. It incorporates an electronic medical record as a basis for real-time data analysis.
GEMS establishes records of medical encounters and rapid identification and notification of clinical events, and it integrates the symptom-level surveillance that is critical for early detection of disease outbreaks and illnesses. With ongoing site and regional data review, population-specific analysis picks up disease trends to provide early warning of disease outbreaks or biological attacks. GEMS serves as the foundation for an Air Force-wide, integrated medical surveillance and command and control network. GEMS has four modules: patient encounter, theater occupational, public health deployed, and theater epidemiology.
Future plans: Complete infrastructure development.

Lightweight Epidemiology Advanced Detection and Emergency Response System (LEADERS)
Type of system: Surveillance
LEADERS is expected to improve the ability to identify and confirm covert biological warfare incidents or significant natural disease outbreaks. LEADERS is to be a comprehensive system that supports joint military and civilian medical surveillance initiatives.
Future plans: To complete infrastructure development and to attain funding for the clinical interface. The next phase will focus on development of medical surveillance algorithms for specified diseases representing the most serious bioterrorism threats.

Airbase/Port Detector System (Portal Shield)
Type of system: Detection
The Portal Shield sensor system was developed to provide early and definitive warning of biological threats for high-value, fixed-site assets, such as air bases and port facilities. Portal Shield can detect and identify up to eight biological warfare agents simultaneously, within 25 minutes. Portal Shield uses a "smart logic" algorithm to help reduce false positives and consumables. The network can operate in a surveillance mode as well as a random or manual sample mode. In addition to the biological detection hardware, each sensor is equipped with its own meteorological station and global positioning system.
System is operational Used primarily by military personnel at fixed asset sites (e.g., air bases and port facilities) Biological Integrated Detection System (BIDS) Type of system: Detection BIDS provides early warning and identification capability in response to a large area biological warfare attack. It is a detection suite in a shelter that is mounted on a dedicated vehicle with an independent power supply. Other BIDS elements include collective protection, environmental control, and storage for supplies such as a global positioning system and radios. BIDS was designed to utilize multiple biological detection technologies in a layered, complementary manner to maximize detection and presumptive identification capabilities. BIDS is used for warning and for confirming that a biological attack has occurred. It provides presumptive identification of the biological agent being used and produces a sample for laboratory analysis. Est. FY 2003 IT cost: $0 Future plans: Replacement by JBPDS in fiscal year 2004 and full automation of real-time detection and identification of the full range of biological agents. Early Warning Outbreak and Response System (EWORS) Type of system: Surveillance EWORS aids in the collection of standardized medical data, particularly for making area-specific and regional comparisons for trend analysis of the data in order to target early warning outbreak recognition of infectious diseases. EWORS provides for timely and accurate dissemination of outbreak information, leading to effective intervention measures, including investigative and containment activities. It establishes baseline measures for trend analysis that is used to differentiate outbreak from non-outbreak disease occurrence; employs a syndromic approach in contrast to disease-specific reporting classifications; and disseminates real-time information and key-function data analysis for instant and programmed interpretation. 
EWORS integrates public health and hospital networks and was designed as a complementary system for conventional surveillance methodologies. Future plans: Establishment of the system in the Americas and continued expansion in Southeast Asia. Electronic Surveillance System for the Early Notification of Community-based Epidemics (ESSENCE) Type of system: Surveillance ESSENCE is used in the early detection of infectious disease outbreaks, and it provides epidemiological tools for improved investigation. It collects ambulatory data from hospitals and clinics in a central database on a daily basis. Epidemiologists can track—in near real-time—the syndromes being reported in a region through a daily feed of reported data. ESSENCE uses the daily data downloads, along with traditional epidemiological analyses that use historical data for baseline comparisons, as well as more cutting-edge analytic methods such as geographic information systems. Analysts have implemented an alerting algorithm methodology to detect localized outbreaks and purely temporal methods for low-level, scattered threats. DOD public health professionals use information from ESSENCE to make crucial decisions about potential health emergencies, based on verified and current data. Est. FY 2003 IT cost: $500,000 Future plans: To improve the interface and find, analyze, and add new data sources. ESSENCE is being upgraded to incorporate the use of nontraditional civilian data sources; it is currently operational in the greater Washington, D.C. area. This expanded capability integrates both military and civilian health data with daily records of pharmacy sales, school absenteeism, and other sources, to allow for early warning of emerging infections. 
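Temporal alerting of the kind ESSENCE performs compares each day's syndrome counts against a historical baseline. The sketch below illustrates one simple method of this family, a moving-baseline control chart that flags counts more than k standard deviations above the baseline mean; the function name, parameters, and data are illustrative assumptions, not ESSENCE's actual algorithm.

```python
from statistics import mean, stdev

def flag_aberration(baseline_counts, todays_count, k=3.0):
    """Flag today's syndrome count when it exceeds the baseline mean
    by more than k standard deviations (a simple Shewhart-style
    control chart; illustrative only, not ESSENCE's method)."""
    mu = mean(baseline_counts)
    sd = max(stdev(baseline_counts), 1.0)  # floor sd to avoid zero-variance alarms
    return todays_count > mu + k * sd

# Hypothetical 28-day baseline of daily respiratory-syndrome visits.
baseline = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13, 12, 16, 15, 14,
            13, 12, 15, 14, 16, 13, 12, 14, 15, 13, 14, 12, 16, 15]
print(flag_aberration(baseline, 14))  # typical day -> False
print(flag_aberration(baseline, 32))  # sudden spike -> True
```

In practice such detectors are run per syndrome and per region, with the baseline window sliding forward each day.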
Embedded Common Technical Architecture (ECTA) Type of system: Supporting technology ECTA will provide military personnel with sensor connectivity, analysis, and warning and reporting capability for Joint Service combat platforms, command and control centers, and fixed sites. Est. FY 2003 IT cost: Not available Future plans: ECTA will merge the current capabilities of the Multipurpose Integrated Chemical Agent Alarm and the JWARN system and provide additional data processing, production of reports, and access to specific data to improve the efficiency of limited personnel assets. It will consist of the hardware and software required to provide sensor connectivity and analysis between detectors and service-specific systems. The JWARN-ECTA will transfer data automatically to and from the detectors and will provide commanders, units, and systems with analyzed data for disseminating warnings down to the lowest level of the battlefield. Joint Biological Point Detection System (JBPDS) Type of system: Detection JBPDS detects, identifies, samples, collects, and communicates the presence of biological warfare agents in order to enhance the survivability of U.S. forces. It consists of complementary trigger, sampler, detector, and identification technologies that allow it to rapidly and automatically detect and identify biological threat agents. Its suite of tools will be capable of identifying biological warfare agents in less than 15 minutes. JBPDS is in low-rate initial production and limited procurement through fiscal year 2006. Est. FY 2003 IT cost: $560,000 Future plans: JBPDS is scheduled to begin full production in fiscal year 2007. The next stage will focus on reducing size, weight, and power consumption while increasing system reliability. JBPDS will also identify up to 26 agents simultaneously and will interface with JWARN. 
Joint Warning and Reporting Network (JWARN) Type of system: Detection/Communication JWARN employs warning technology to collect, analyze, identify, locate, report, and disseminate information related to threats and potentially contaminated areas. It gathers information from detectors and uses this information to compute toxic corridors and attacks and to display near real-time results to onsite commanders. JWARN will be employed in making decisions about warning dissemination down to the lowest level on the battlefield and linked to a global command and control system. Future plans: Fielding of JWARN will begin in fiscal year 2004. Plans include using the full JWARN capability to provide commanders with automatic data from sensors and detectors. Epidemiological Interactive System (EPISYS) Type of system: Surveillance EPISYS is a program that enables rapid assessment of disease trends in order to focus research efforts of epidemiologists. It was developed to integrate Navy inpatient hospitalization data with career history and demographic data to form a single system with a flexible interface. It is capable of detecting and flagging diagnostic categories that show rates in excess of their historical threshold values. This surveillance capability allows for the early detection of increased illness rates so that intervention can be started early. Using EPISYS, users can rapidly answer basic epidemiological questions regarding disease and injury rates. Type of system: Communications EPIWIZ is a research tool that was developed to organize SAMS data for further analysis of shipboard illness and injury data. EPIWIZ is expected to enhance the Navy's medical readiness by converting SAMS medical encounter data into surveillance information. It will provide Navy medical personnel easy access to shipboard sick-call information so they can monitor trends, prevent injuries and diseases, facilitate reporting, and enhance medical outcomes. 
EPIWIZ allows the user to display SAMS medical encounter data in a spreadsheet format to facilitate data analysis. This improved data analysis helps close the gap between a medical occurrence and preventive intervention. Field Medical Surveillance System (FMSS) Type of system: Surveillance FMSS is designed to help detect emerging health problems that might occur during foreign deployments or conflicts. FMSS can help field staff to determine incidence rates; project short-term trends; profile the characteristics of the affected population by person, time, and place; track the mode of disease transmission; and generate various graphs and reports. Once data are entered for a patient, the input is processed, and compatible diagnoses are presented in order of probability, with biological weapons agents highlighted. FMSS also provides on-line access to medical reference data and an interface to the GIDEON database—a well-known knowledge database designed to help diagnose most of the world's infectious diseases based on the patient’s signs, symptoms, and laboratory findings. Many FMSS features have now transitioned to the Navy’s Medical Data Surveillance System and to other development projects. Medical Data Surveillance System (MDSS) Type of system: Surveillance MDSS is an interactive Web application for collecting data and identifying changes in rates of naturally occurring injuries and illnesses found within routinely collected clinical data on active duty personnel. It compiles routine reports on disease and non-battle injury rates and generates special reports to assist medical staff to investigate the onset of disease and to evaluate the effectiveness of preventive measures. By applying advanced analytic techniques, MDSS can detect shifts in disease trends and outbreaks with minimal historical information on illness patterns characteristic of the area of interest, thereby making it particularly suitable for theater operations. 
These techniques also facilitate ad hoc analysis. MDSS is being configured to meet certification requirements so it can be deployed aboard Navy ships. MDSS is being pilot tested in the 18th Medical Command in Korea and in Navy hospitals in Yokosuka, Japan, and San Diego, California. Est. FY 2003 IT cost: $1,200,000 Future plans: Continued research and development at an advanced research level and testing in a deployed environment at fixed facilities and operational units. Navy Disease Reporting System (NDRS) Type of system: Communications NDRS provides for expedient and efficient submissions of reportable events. It may also be used to track and report disease and non-battle injuries. Its main purpose is to improve the compliance, timeliness, and reliability of disease reporting. Functions have been included to assist local command with state reporting, prevention programs, and contact tracing. NDRS enables users to determine what diseases are present in a particular country, how many outbreaks have occurred, and what treatments were used. NDRS streamlines reporting and provides ready access to epidemiological data. NDRS data are used to conduct trend analysis and to pool findings with data from other services. Future plans: Integration into the Navy’s database for tracking medical encounters, known as the Shipboard Non-Tactical Automated Data Processing Automated Medical System (SAMS). DOE is developing new capabilities to counter chemical and biological threats. DOE expects the results of its research to be public and possibly lead to the development of commercial products in the domestic market. DOE’s Chemical and Biological National Security Program has conducted research on biological detection, modeling and prediction, and biological foundations to support efforts in advanced detection, attribution, and medical countermeasures. 
Several of DOE’s national research laboratories (e.g., Lawrence Livermore, Los Alamos, Oak Ridge, and Sandia) have conducted biological and environmental research related to bioterrorism preparedness and response. DOE identified 14 information systems and supporting technologies. Autonomous Pathogen Detection System (APDS) Type of system: Detection APDS is an automated, podium-sized system that monitors the air for all three biological threat agents (bacteria, viruses, and toxins). The system has been developed to protect people in critical or high-traffic facilities and at special events. The system performs continuous aerosol collection, sample preparation, and multiplexed biological tests using advanced immunoassays to detect bacteria, viruses, and toxins. More than ten agents are assayed at once. Current research and development work is incorporating polymerase chain-reaction (PCR) techniques for detecting DNA. Single units can be operated to monitor a local space or a central conduit like an air-supply duct. In a more powerful application, a network of APDS units can be integrated with central command and control to protect larger areas. The APDS units can also be networked and integrated with other sensing and analysis systems to provide multifaceted detection and response capabilities. Est. FY 2003 IT cost: Not available Future plans: APDS will move into redesign and piloting in fiscal year 2004. There will be a significant effort in communications and IT for networked instruments in field-testing and beyond. Biological Aerosol Sentry and Information System (BASIS) Type of system: Detection BASIS is a large-area aerosol pathogen detection system. BASIS will provide early detection of biological incidents for special events, such as large assemblies and major sporting events. Planned for civilian use, it will detect a biological incident within a few hours of attack, early enough to allow public health officials to mount an effective medical response. 
BASIS was developed in close cooperation with federal, state, and local public health agencies to ensure support for real world operational needs. This system was adapted to process samples from the BioWatch program, beginning in February 2003. Est. FY 2003 IT cost: $350,000 Future plans: BASIS funding ended in fiscal year 2002. The fate of BASIS for fiscal year 2003 was unknown. Given the likelihood of additional armed conflicts, LLNL anticipates seeing BASIS simultaneously deployed at multiple sites, such as cities. Computational Design of Pathogen Detection Assays (KPATH) Type of system: Supporting technology KPATH is an automated system that analyzes pathogen DNA signatures to build and maintain unique polymerase chain reaction (PCR) detection signatures. Signatures are requested by collaborators and are used in BASIS. DNA signatures developed by KPATH are now in use in the BioWatch program. Used primarily by federal agencies (e.g., HHS, USDA, and DOD) Future plans: KPATH will be LLNL’s lead system for PCR diagnostic signature design. LLNL will continue enhancements to KPATH’s DNA signature capabilities and will work on its ability to computationally predict protein signatures. Los Alamos National Laboratory (LANL) Biological Aerosol Sentry and Information System (BASIS) See BASIS under Lawrence Livermore National Laboratory. Type of system: Supporting technology Bioreactor Simulation Tools model and analyze biological systems (i.e., genetic networks, metabolic networks, and signal transduction networks). Future plans: Development of a forward-looking capability to create detailed models for fundamental processes in molecular biology. Bio-Surveillance Analysis Feedback Evaluation and Response (B-SAFER) Type of system: Surveillance B-SAFER is a medical surveillance system using data from emergency departments, clinical laboratories, and nontraditional sources (e.g., RN hotline, drug information calls, ambulance services). 
B-SAFER recognizes an anomaly, either naturally occurring or caused by human intervention. B-SAFER is compliant with HIPAA and NEDSS. Future plans: To project potential outcomes of an outbreak and the potential benefit of intervention techniques. Type of system: Supporting technology Flow cytometry is used in the detection and identification of pathogens. Flow cytometers are devices composed of lenses, lasers, computers, and other high-tech equipment that allow researchers to analyze, characterize, and sort thousands of biological cells, chromosomes, or molecules in minutes. Future plans: Database and data analysis tool development. Type of system: Supporting technology OpenEMed is a distributed, open architecture, open source system that supports image, audio, and graphical data, creating a virtual patient record. OpenEMed has been used with B-SAFER and New Mexico’s NEDSS integrated data repository. OpenEMed includes standard service components for person lookup and identity management, dictionary queries, a clinical data repository, and HIPAA-compliant access control. This software is available for use by the public. Type of system: Detection This project will develop a point sensor for the detection of pathogens. This biosensor is being developed for the rapid detection of disease markers to aid in early diagnosis and could also be used for environmental and medical surveillance for homeland security. Est. FY 2003 IT cost: $1,800,000 Future plans: This biosensor is being adapted for early diagnosis of common infectious diseases including respiratory viruses and tuberculosis. There is a proposal pending to adapt it to medical surveillance for the Department of Homeland Security. 
Oak Ridge National Laboratory (ORNL) Type of system: Supporting technology LandScan USA is expected to be a high-resolution population distribution model that will provide timely and more spatially precise population and demographic information to support geographic analyses anywhere in the United States. In addition to its application for emergency planning in case of an attack or natural disaster, it has potential uses for socioenvironmental studies, including exposure and health risk assessment, and urban sprawl estimates. It can support improved development of emergency response plans in case of an attack or natural disaster, homeland security, environmental justice analyses, exposure/risk assessment, and evaluation of risks. The data it provides include daytime and nighttime population distributions. Type of system: Detection SensorNet is expected to be a comprehensive, national system for managing incidents for real-time detection, identification, and assessment of chemical, biological, radiological, and nuclear threats. It is intended to bring together and coordinate all necessary knowledge and response assets quickly and effectively. SensorNet is to consist of sensor technologies, real-time threat assessment, nationwide coverage, and nationwide real-time remote communications. SensorNet is currently under development as a standards-based architecture with encryption and access controls. Future plans: To continue operational prototypes and refine the design for a nationwide system. Sandia National Laboratories (SNL) Type of system: Supporting technology Enabling Analytical and Modeling Tools for Enhanced Disease Surveillance are analytical tools to detect unusual events from a natural background. These tools have been tested with influenza, respiratory illnesses, and dengue fever and are expected to be incorporated into ongoing projects. The flexibility of this project allows for tailoring to specific diseases. Est. 
FY 2003 IT cost: $0 Future plans: Provide a distributed software framework for integrating information from disparate sources; develop and integrate analytical tools for earlier detection of disease outbreaks. Type of system: Detection ISMs are expected to provide an intelligent integration of detection systems that support wireless ad hoc networking. ISMs are intended to be used in support of DOD’s BDI testbed, PROTECT, PROACT, and a project for the Mint. Future plans: ISMs are currently under development; more capable computational components are to be integrated when available. Type of system: Detection µChemLab is a portable, hand-held chemical analysis system, which is fully self-contained and incorporates "lab on a chip" technologies. It is a sensitive device with fast response times in a low-power, compact package used for monitoring facilities. While µChemLab is currently being developed for chemical detection, it can also be used for biological agent detection. Portable, stand-alone devices for the analysis of chemical agents and protein biotoxins have been developed and tested at the research prototype stage. Current research is focused on improving the performance and expanding the capability of these and other such devices. Future plans: Analysis of additional agents. Rapid Syndrome Validation Project (RSVP) Type of system: Surveillance/Communication RSVP is designed to facilitate rapid communications. It provides early warning and response to emerging biological threats, as well as to emerging epidemics and diseases, by providing real-time clinical information about current symptoms, disease prevalence, and geographic location. RSVP provides a mechanism to inform health care providers about health alerts and to facilitate the process of collecting data on reportable diseases. 
RSVP is designed to overcome existing barriers to reporting suspicious or unusual symptoms in patients, and to capture clinician judgment regarding the severity of an illness and the likely category of the disease. RSVP fully supports on-line data entry, reducing the paperwork associated with reporting infectious diseases. RSVP immediately catalogs all reports in a summary, which is instantaneously available to local public health officials and physicians. Future plans: Development of neural networks and maps. Within HHS, six agencies work on bioterrorism issues. Combined, these agencies have a budget of $3.6 billion for bioterrorism in fiscal year 2004. HHS’s Office of the Assistant Secretary for Public Health and Emergency Preparedness will have $42 million in fiscal year 2004 to direct and coordinate the implementation of HHS’s bioterrorism programs and to support the Department of Homeland Security by providing health and medical leadership. CDC’s bioterrorism budget for fiscal year 2004 will be $1.1 billion, $940 million of which will fund CDC’s ongoing state and local preparedness program, which supports state surveillance and epidemiology capacity, laboratory capacity, communication and IT infrastructure, education and training, and health information dissemination. In addition, CDC has its own office, the Office of Terrorism Preparedness and Response, to coordinate efforts. CDC plans to upgrade its own system and laboratory capacity and to expand oversight of inter-laboratory transfers of dangerous pathogens and toxins, laboratory safety inspections, and anthrax research. The Health Resources and Services Administration also provides grants to hospitals for bioterrorism preparedness and response. The Agency for Healthcare Research and Quality funded research on the use of information systems and decision support systems to enhance preparedness for the delivery of medical care in the event of a bioterrorist attack. 
FDA is increasing its food safety responsibilities by improving its laboratory preparedness and food monitoring and inspections in accordance with the Public Health Security and Bioterrorism Preparedness and Response Act of 2002. The National Institutes of Health is planning to implement its strategic plan for biodefense research and research agenda for CDC Category A, B, and C agents. HHS identified 28 information systems and supporting technologies. Type of system: Surveillance As part of CDC’s national influenza surveillance effort, CDC receives weekly mortality reports from 122 cities and metropolitan areas in the United States within 2-3 weeks from the date of death. These reports summarize the total number of deaths occurring in these cities/areas each week due to pneumonia and influenza. This system provides CDC with preliminary information with which to evaluate the impact of influenza on mortality in the United States and the severity of the currently circulating virus strains. The advantage of this system is that it provides timely data 2-3 years before finalized mortality data are available from CDC’s National Center for Health Statistics. Deaths are reported to CDC by place of occurrence, not by residence. This system is part of BioWatch. Active Bacterial Core Surveillance (ABCs) Type of system: Surveillance As part of CDC’s Emerging Infections Program, ABCs determines the incidence and epidemiological characteristics of invasive bacterial disease due to pathogens of public health importance, determines the molecular patterns and microbiological characteristics of disease-causing elements, and provides an infrastructure for nested special studies to identify risk factors and to evaluate prevention policies. ABCs is a population- and laboratory-based surveillance system. Est. FY 2003 IT cost: $87,372 Future plans: Measuring the impact of newly licensed vaccines on disease and drug resistance and harnessing molecular techniques to characterize bacteria. 
Type of system: Communications The Bioterrorism Event Notification system tracks emergency-related phone calls to CDC’s Emergency Preparedness and Response Branch, which maintains the 24-by-7 emergency contact numbers for CDC. The system provides a data set that can be used to quantify the number and types of incoming requests for emergency assistance. Border Infectious Disease Surveillance Project (BIDS) Type of system: Surveillance BIDS helps public health officials to better understand and detect important infectious diseases along the U.S.-Mexico border. The system conducts active, sentinel surveillance for syndromes consistent with hepatitis and febrile-rash illness at clinical facilities on both sides of the border. As an infectious disease surveillance system combining syndromal surveillance with appropriate laboratory diagnostic testing, BIDS can directly enhance bioterrorism surveillance in this key region. Est. FY 2003 IT cost: $35,000 Future plans: Expansion of the number of sites and syndromes and complete development of the next BIDS software version, involving Web-based data entry, which will be consistent with the National Notifiable Disease Surveillance System standards. Type of system: Surveillance CaliciNet is used to assist public health officials to more quickly identify contaminated food products associated with outbreaks by allowing for the linking of epidemiological and laboratory information from specimens that are collected as part of outbreak investigations for viral gastroenteritis. While caliciviruses are not on the CDC list of bioterrorism agents, they could be used in an attack. Future plans: CaliciNet will be replaced by a larger system, which is still in the process of being named. Type of system: Supporting technology DPDx uses the Internet to strengthen the level of laboratory professionals’ expertise in diagnosing foodborne and other parasitic diseases. DPDx offers reference and training and diagnostic assistance. 
Laboratory professionals can transmit images to CDC and obtain answers to their inquiries in minutes to hours. This allows them to more efficiently address difficult diagnostic cases in normal or outbreak situations and to disseminate information more rapidly. In addition, this method substantially increases the interaction between CDC and public health laboratories. Est. FY 2003 IT cost: $7,000 Future plans: Training and continuing education of laboratory professionals; provision to health facilities worldwide of diagnostic assistance by CDC staff supported, when needed, by experts from other institutions; diagnostic quizzes to assess the skills of laboratory professionals; and informal, early detection of unusually clustered, atypical, or emerging parasitic diseases. Plans also include ensuring communication and functionality with all state public health departments. Early Aberration Reporting System (EARS) Type of system: Communications EARS is a SAS-based, Web-enabled reporting tool that allows the analysis of public health surveillance data using aberration detection methods. Its goal is to assist public health officials in the early identification of disease outbreaks, as well as bioterrorism events. It assesses whether the current number of reported cases of an event is higher than usual. EARS provides results from its aberration detection analysis, as well as quick data summaries and graphs. Est. FY 2003 IT cost: $240,000 Future plans: Incorporating bioterrorism detection methods in future versions. Plans also include the implementation of a GIS that will allow for maps of syndromic or disease events and the incorporation of additional methodologies. Electronic Foodborne Outbreak Reporting System (EFORS) Type of system: Surveillance EFORS replaces the Foodborne Disease Outbreak Surveillance System. EFORS provides a Web-based application that enables states to report foodborne outbreaks electronically rather than through the former paper-based system. 
Data are then used for annual summary reports and monitoring for multi-state outbreaks. Est. FY 2003 IT cost: $126,949 Future plans: Improving the database structure to allow immediate viewing of reports as changes occur. EFORS intends to provide data for estimates of the burden of foodborne illness by food commodity. Epidemic Information Exchange (Epi-X) Type of system: Communications Epi-X connects state and local public health officials so that they can share information about outbreaks and other acute health events, including those possibly related to bioterrorism. It is intended to provide epidemiologists and others with a secure, Web- based platform that can be used for instant emergency notification of outbreaks and requests for CDC assistance. Epi-X provides tools for searching, tracking, discussing, and reporting on diseases. EPI-X is being used in DHS’s BioWatch program. Est. FY 2003 IT cost: $1,382,199 Future plans: Increasing its user base to ensure rapid, secure communications at all levels of public health, such as linking to CDC’s Emergency Operations Center and to state and local public health departments. Plans also include linking with comparable state level systems, providing secure communication for multistate outbreak response teams, and automating the recognition of disease outbreaks across jurisdictions. Federal Facilities Information Management System (FFIMS) Type of system: Supporting technology FFIMS aids in collecting, managing, and analyzing data that originate outside the agency. Its primary use is as an investigative system to aid in public health assessments at specific sites. It has been most useful in the collection and analysis of voluminous environmental sampling data. FFIMS can be used to investigate an anomaly after it has been identified and to help determine the source of health outcomes or the potential risk of adverse health outcomes. Future plans: Addition of remote data collection and conversion to a Web-based application. 
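EARS, described above, assesses whether the current number of reported cases of an event is higher than usual. One standard family of aberration detection methods for such daily count series is the one-sided CUSUM, which accumulates small excesses over the baseline until they cross an alert threshold. The sketch below is an illustration of that general technique under assumed parameter values; it is not EARS's SAS implementation.

```python
def cusum_alerts(counts, baseline_mean, slack=1.0, threshold=4.0):
    """One-sided CUSUM over a series of daily case counts: accumulate
    excesses above (baseline_mean + slack) and alert when the running
    sum crosses threshold, then reset. Illustrative of the technique
    only, not EARS's actual code."""
    s, alerts = 0.0, []
    for count in counts:
        s = max(0.0, s + (count - baseline_mean - slack))
        if s > threshold:
            alerts.append(True)
            s = 0.0  # reset after signaling
        else:
            alerts.append(False)
    return alerts

# A week of hypothetical counts against a historical mean of 10/day;
# the sustained rise on days 5-7 triggers an alert on the sixth day.
print(cusum_alerts([10, 9, 11, 10, 13, 14, 15, 9], 10.0))
```

Unlike a single-day threshold test, the CUSUM can detect a gradual outbreak in which no individual day is dramatically elevated.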
Foodborne Disease Active Surveillance Network (FoodNet) Type of system: Surveillance As part of CDC’s Emerging Infections Program, FoodNet provides a network for responding to new and emerging foodborne diseases of national importance, monitoring the burden of foodborne diseases, and identifying the sources of specific foodborne diseases. It consists of active surveillance and a related epidemiological study, which helps public health officials better understand the epidemiology of foodborne diseases in the United States. Est. FY 2003 IT cost: $515,900 Future plans: Estimate the burden of foodborne illnesses in the United States, follow trends in the incidence of foodborne infectious disease, and attribute foodborne infections to specific food vehicles. Geographic Information Systems (GIS) Type of system: Supporting technology GIS tracks the spread of environmental contamination through a community, identifies geographic areas of particular health concern, and identifies susceptible populations. Among other things, GIS can be used to help identify spatial clustering of abnormal events as the data are collected. This can assist under emergency conditions by identifying affected areas, predicting dispersion of the agent, and sharing information with personnel who are responsible for incident management. Est. FY 2003 IT cost: $2,091,737 Future plans: Expansion of GIS services (e.g., for field-based use), integration with the Hazardous Substances Emergency Event System, and possible integration with CDC's NEDSS. Global Emerging Infections Sentinel Network (GeoSentinel) Type of system: Surveillance GeoSentinel is a Web- and provider-based sentinel network. It consists of travel/tropical medicine clinics around the world participating in surveillance to monitor geographic and temporal trends in morbidity among travelers and other globally mobile populations. 
Passive surveillance and response capabilities are also extended to a broader network of GeoSentinel Network members. Est. FY 2003 IT cost: $10,000 Future plans: Increasing the number and geography of involved clinics, expanding partnerships, and enhancing electronic infrastructure to include simultaneous conferencing in real time with all global sites in preparation for global disease outbreaks or bioterrorism threats. Hazardous Substances Emergency Event System (HSEES) Type of system: Surveillance HSEES collects and analyzes information on events involving hazardous substances as well as threatened releases that result in a public health action. Information about the chemical, victims, and event is recorded by state health departments and transmitted to CDC in near real time for analysis and dissemination of reports. It can be easily enhanced to collect biological agents in addition to chemical agents. Future plans: Inclusion of additional state health departments and integration with GIS. Type of system: Communications HAN is a nationwide system serving as a platform for the distribution of health alerts, dissemination of prevention guidelines and other information, distance learning, national disease surveillance, and electronic laboratory reporting, as well as for CDC's bioterrorism and related initiatives to strengthen preparedness at the local and state levels. Among other things, HAN is to provide early warning alerts and to ensure capacity to securely transmit surveillance, laboratory, and other sensitive data. Type of system: Surveillance The Influenza Sentinel Provider Surveillance System is one of four separate components that allows CDC to, among other things, detect changes in influenza and monitor influenza-like illness. It is accessible through the Internet and provides data on the circulation and impact of influenza year-round. 
It also provides information on new influenza strains in circulation that can be used to determine the components of the vaccine for the next influenza season and as a pandemic warning. Laboratory Information Tracking System (LITS Plus™) Type of system: Supporting technology LITS Plus™ is a laboratory data management system, which is used to enter, edit, analyze, and report laboratory test results electronically. Users can examine all the data about a specimen, including data from all laboratories that performed tests on the specimen. It provides seamless integration of laboratory data, including laboratory instrument data and incorporates extensive laboratory data management functionality. Future plans: Develop and implement standardized modules in LITS Plus™ for all CDC Category A bioterrorism labs and to comply with CDC’s Public Health Information Network. Laboratory Response Network (LRN) Type of system: Communications LRN is an integrated network of public health and clinical laboratories that provide laboratory diagnostics and disseminated testing capacity for public health preparedness and response. It ensures that all member laboratories collectively maintain state-of-the-art biodetection and diagnostic capabilities as well as surge capacity for all biological and chemical agents likely to be used by terrorists. LRN is based on the use of standard protocols and reagents, integrated data management, and secure communications. Est. FY 2003 IT cost: $502,500 Future plans: Update and revise laboratory protocols for biological and chemical agents on the LRN Web site; develop new screening assays for biological agents and obtain FDA approval for in vitro diagnostic use of new rapid screening assays; link to NEDSS; expand domestic partnership; and upgrade restricted Web site for interoperability and data exchange with key clinical entities. 
Type of system: Surveillance The National Botulism Surveillance system compiles information on cases of foodborne and wound botulism. CDC provides clinical, epidemiological, and laboratory consultation for suspected botulism cases 24 hours a day and is the only source for antitoxin in the United States. Also, CDC conducts a yearly survey of state and territorial epidemiologists and of state public health laboratory directors to identify additional cases that have not been previously reported. Est. FY 2003 IT cost: $2,000 Future plans: Use electronic near real-time reporting of botulism testing results, which will be integrated with reports of clinical consultations and antitoxin releases for suspect cases and for rapid case updates. Type of system: Surveillance The NEDSS base system is a component of CDC's overall NEDSS initiative. It will provide a NEDSS architecture-compliant option for states to use as a platform for disease surveillance. The NEDSS base system is a CDC-developed system that provides a platform upon which many public health surveillance systems, processes, and data can be integrated in a secure environment. It will provide the foundation for state and program area needs, data collection, and processing, including the development of modules that can be used for data entry and for management of core demographic and notifiable disease data via a Web browser. The first release supports the electronic processes involved in notifiable disease surveillance and analysis, replacing the functionality currently supported by the NETSS system. States also have the option to develop systems or elements on their own through the use of grants provided for this purpose rather than using the NEDSS base system. Est. FY 2003 IT cost: $27,609,000 Future plans: Additional functionality to support other programs, such as chronic disease and environmental health programs, for use by epidemiologists, laboratory personnel, and data managers from various program areas. 
Professional associations’ involvement includes the Association of State and Territorial Health Officials (ASTHO), the Association of Public Health Laboratories (APHL), the Council of State and Territorial Epidemiologists (CSTE), the National Association of Health Data Organizations (NAHDO), the National Association of County and City Health Officials (NACCHO), and the National Association for Public Health Statistics and Information Systems (NAPHSIS). National Electronic Telecommunications System for Surveillance (NETSS) Type of system: Surveillance NETSS provides weekly data regarding cases of nationally notifiable diseases. It serves a supportive role for bioterrorism-related surveillance, allowing the transmission of limited epidemiological information describing cases of infectious disease that may or may not be related to bioterrorism. As needed, local and state health departments can use well-established, routine NETSS information exchange protocols to augment more focused or specific bioterrorism surveillance data exchange. FY 2002 IT cost: $586,301 (includes the cost for the National Notifiable Disease Surveillance System) Est. FY 2003 IT cost: $620,929 (includes the estimated cost for the National Notifiable Disease Surveillance System) National Molecular Subtyping Network for Foodborne Disease Surveillance (PulseNet) Type of system: Supporting technology PulseNet is an early warning system for outbreaks of foodborne diseases. It is a national network of public health laboratories that perform DNA "fingerprinting" on foodborne bacteria. It permits rapid comparisons of these fingerprint patterns through an electronic database and provides critical data for the early recognition and timely investigation of outbreaks. Est.
FY 2003 IT cost: $235,000 Future plans: Expansion to include additional pathogens (including those that may be used by bioterrorists) and to facilitate the establishment of compatible networks in Europe, the Pacific Rim region, and Latin America. National Respiratory and Enteric Virus Surveillance System (NREVSS) Type of system: Surveillance/Communications NREVSS is a laboratory-based system that monitors temporal and geographic patterns associated with the detection of respiratory syncytial viruses (RSV), human parainfluenza viruses (HPIV), respiratory and enteric adenoviruses, and rotaviruses. Influenza specimen information, also reported to NREVSS, is integrated with CDC influenza surveillance. While these agents are not on the CDC list, they could be potentially used for bioterrorism. NREVSS is a Web-based and telephone dial-in system. Future plans: Replace the telephone dial-in functionality to be Web-based once all users have access capabilities. Type of system: Surveillance The plague surveillance system is comprised of clinical, epidemiological, and ecologic information on presumptive and confirmed cases reported by state public health departments. Basic descriptive statistical analyses are performed on these data, such as regional- and county-specific incidence rates. Plague is also one of three internationally quarantinable diseases, and, according to the International Health Regulations, all cases must be investigated and reported to the World Health Organization in Geneva. Future plans: Integrate with CDC’s bioterrorism preparedness programs. Public Health Laboratory Information System (PHLIS) Type of system: Surveillance PHLIS is designed for use in public health laboratories for the reporting and analysis of a variety of conditions of public health importance, which have a significant laboratory-testing component, e.g., salmonella. 
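DNA fingerprint patterns of the kind PulseNet exchanges are often compared with a band-sharing (Dice) coefficient over the fragment sizes two patterns have in common. The sketch below is illustrative only; the band sizes are invented, and it does not show PulseNet's actual matching software.

```python
def dice_similarity(bands_a, bands_b):
    """Dice band-sharing coefficient: 2 * shared / (len(a) + len(b)).
    Bands are represented here as sets of fragment sizes (assumed units: kb)."""
    a, b = set(bands_a), set(bands_b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical PFGE patterns (fragment sizes are made up)
outbreak_pattern = {668, 452, 361, 244, 112}
isolate_pattern = {668, 452, 361, 240, 112}
score = dice_similarity(outbreak_pattern, isolate_pattern)  # 2*4/10 = 0.8
```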
PHLIS reports standard demographic data that are associated with a laboratory isolate as well as laboratory test results, information about laboratory procedures, and outbreak-related information. Statistical Outbreak Detection Algorithm (SODA) Type of system: Surveillance SODA processes pathogen information (i.e., Salmonella, Shigella, and E. coli) on a daily basis to detect anomalies or unusual clusters in the reported versus expected counts at the state, regional, and national levels. Its main goal is to provide users with an interface to view reports, generate graphs, and produce maps from the state, regional, and national perspectives. SODA utilizes a cumulative sums algorithm commonly used in the manufacturing industry. The output is a statistical measure that is flagged for review by CDC's foodborne staff. SODA uses general information from lab specimen data, such as date and location. Future plans: Addition of other pathogens for monitoring. Type of system: Surveillance As part of CDC’s Emerging Infections Program, the Unexplained Deaths and Critical Illnesses Surveillance System is expected to contain limited epidemiological and clinical information on previously healthy persons aged 1 to 49 years who have illnesses with possible infectious causes. It is also expected to provide active population-based surveillance through coroners and medical examiners at limited sites. National and international surveillance will be passive for clusters of unexplained deaths and illnesses. Est. FY 2003 IT cost: $37,290 Future plans: Further development of an integrated data management system for clinical, epidemiological, specimen tracking, and test results data, including novel diagnostics and pathogen discovery. Electronic Laboratory Exchange Network (eLEXNET) Type of system: Surveillance eLEXNET provides a Web-based system for real-time sharing of food safety laboratory data among federal, state, and local agencies.
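The cumulative-sums approach that SODA applies to daily pathogen counts can be sketched as a one-sided CUSUM: excesses over the expected count accumulate, and an alarm fires when the running sum crosses a decision threshold. The parameters below (expected count, slack, and threshold) are assumed for illustration and are not SODA's actual values.

```python
def cusum_flags(counts, expected, slack=1.0, threshold=4.0):
    """One-sided CUSUM: accumulate excesses of observed counts over
    (expected + slack); flag days where the cumulative sum crosses
    the decision threshold, then reset."""
    s, flags = 0.0, []
    for day, c in enumerate(counts):
        s = max(0.0, s + (c - expected - slack))
        if s >= threshold:
            flags.append(day)
            s = 0.0  # reset after an alarm
    return flags

# Hypothetical daily isolate counts against an assumed baseline of 2/day
daily_counts = [2, 1, 3, 2, 6, 7, 5, 2, 1]
alarms = cusum_flags(daily_counts, expected=2.0)  # alarm on day 5
```

The slack term keeps ordinary day-to-day variation from accumulating, so only a sustained or sharp excess over the expected count triggers review.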
eLEXNET is seamless and secure, allowing public health officials at multiple government agencies engaged in food safety activities to compare and coordinate laboratory analysis findings. It captures food safety sample and test result data from participating laboratories and uses them for risk assessment and decision-support purposes, improving early detection of problem products and enabling active food safety surveillance and evaluation of potential threats to the American food supply. Est. FY 2003 IT cost: $3,750,000 Future plans: Expanding participating food safety laboratory partnerships and developing an integrated short- and long-term strategic plan and communications planning approach. VA manages one of the nation’s largest health care systems and is the nation’s largest drug purchaser. The department purchases pharmaceuticals and medical supplies for the Strategic National Stockpile Program and the National Medical Response Team stockpiles. VA identified one information system. Type of system: Surveillance EPI identifies antibiotic-resistant and otherwise problematic pathogens within the Veterans Health Administration facilities. This information is used to help formulate plans on a national level for intervention strategies and resource needs. Results of aggregate data may also be shared with appropriate public health authorities for planning on the national level for the non-VA and private health care sectors. EPI provides general surveillance on specific pathogens and diseases. Future plans: Addition of new diseases or organisms as they are identified. EPA has responsibilities to prepare for and respond to emergencies, including those related to biological materials. EPA can be involved in detection of agents by environmental monitoring and sampling. EPA is responsible for protecting the nation’s water supply from terrorist attack and for prevention and control of indoor air pollution.
EPA’s National Homeland Security Research Center is in the process of preparing an on-line virtual library of homeland security-related documents and tools intended to assist decision making during emergency situations. Data in the library will include exposure guidelines, databases, publications, and Web sites applicable to biological, chemical, and radiological threats. EPA identified five supporting technologies. Indoor Air Quality and Inhalation Exposure (IAQX) Type of system: Supporting technology IAQX is an indoor air quality simulation package that consists of a general-purpose simulation program and a series of stand-alone, special purpose programs. Relatively simple mass transfer models are provided by the general-purpose simulation program, and more complex models are implemented by the stand-alone, special purpose simulation programs. In addition to performing conventional indoor air quality simulations, which calculate the pollutant concentration and personal exposure as a function of time, IAQX can estimate the adequate ventilation rate when certain air quality criteria need to be satisfied. This feature is useful for product stewardship and risk management. Future plans: Addition of more special purpose programs, such as models for indoor air chemistry and indoor application of pesticides. Type of system: Supporting technology EPANET was developed to help water utilities maintain and improve the quality of water delivered to consumers through their distribution systems. It is a computer modeling software package that can be used to simulate drinking water distribution systems and to simulate water flow patterns in those systems. The model is also used to simulate contaminant dispersion patterns if chemical or biological contaminants are introduced into a water system. It can be used to inform water utilities where critical points (valves, pumps, etc.) are located in the system and what the impact on the system would be if those points were attacked.
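The well-mixed, single-zone mass balance behind simple indoor air simulations such as IAQX's general-purpose program reduces to dC/dt = S/V - (Q/V)C, where S is the emission rate, V the zone volume, and Q the ventilation rate; at steady state C = S/Q, so the ventilation needed to satisfy a concentration criterion is Q = S/C_limit. The sketch below is illustrative only, with assumed numbers; it is not drawn from IAQX itself.

```python
def concentration_series(source_mg_h, volume_m3, vent_m3_h,
                         hours=24, dt=0.01):
    """Euler-integrate dC/dt = S/V - (Q/V)*C for a well-mixed zone.
    Returns the concentration (mg/m^3) sampled at the end of each hour."""
    c, out = 0.0, []
    steps_per_hour = int(1 / dt)
    for _ in range(hours):
        for _ in range(steps_per_hour):
            c += dt * (source_mg_h / volume_m3 - (vent_m3_h / volume_m3) * c)
        out.append(c)
    return out

def required_ventilation(source_mg_h, c_limit_mg_m3):
    """Steady state is C* = S/Q, so the airflow needed to hold a
    criterion concentration is Q = S / C_limit."""
    return source_mg_h / c_limit_mg_m3

# Assumed values: 10 mg/h source, 50 m^3 room, 100 m^3/h ventilation
series = concentration_series(10.0, 50.0, 100.0)
```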
Type of system: Supporting technology RISK is an indoor air quality model developed by the Indoor Environment Management Branch of EPA’s National Risk Management Research Laboratory. It was developed as a tool to carry out the mission of the engineering portion of the EPA’s indoor air research program to provide tools necessary to reduce individual exposure to and risk from indoor air pollutants. RISK uses the concepts of buildings and scenarios, including fixed information about a building (the number of rooms, the room dimensions, and the arrangement of the rooms) and changing information (sources, sinks, air exchange, room-to-room flows, etc.). The model provides risk, exposure, and concentration information. RISK allows analysis of the impact of multiple pollutants on the indoor environment. Future plans: Addition of more risk calculations and of models and suggested values for indoor particulate. Safe Drinking Water Accession and Review System (SDWARS) Type of system: Supporting technology SDWARS tracks monitoring results for specific lists of unregulated chemical contaminants to indicate occurrences in public drinking water systems. Public water systems submit Unregulated Contaminant Monitoring Rule (UCMR) data elements through SDWARS for inclusion in the National Drinking Water Contaminant Occurrence Database. SDWARS is a one-entry approach to the electronic reporting process to improve reporting quality, reduce reporting errors, and reduce the time involved in investigating and correcting errors at all levels (e.g., laboratories, states, and EPA). Future plans: Accommodate additional contaminants, including microbial contaminants. Safe Drinking Water Information System Federal (SDWIS/FED) Type of system: Supporting technology SDWIS/FED is a database designed and implemented by EPA to meet its needs in the oversight and management of the Safe Drinking Water Act.
It contains public water system inventory information and summary violation data submitted by states and EPA regions in conformance with reporting requirements established by statute, regulation, and guidance. Future plans: Replace with a new drinking water data warehouse. In addition to the agencies’ individual systems that they identified, there are several other IT initiatives in process or being planned to better support agencies’ abilities to prepare for, respond to, and communicate during public health emergency events. These projects are intended to provide integration and interoperability among systems, improve communications, and better support the public health infrastructure. State, territorial, and local public health agencies and various public health-related professional associations The PHIN is an effort initiated by the CDC to provide interoperability across public health functions and organizations, such as state and federal agencies, local health departments, public health labs, vaccine clinics, clinical care, and first responders. It is intended to, among other things, (1) deliver industry standard data to public health, (2) investigate bioterrorism detection, (3) provide disease tracking analysis and response, and (4) support local, state, and national data needs. It builds on existing CDC investments from HAN, NEDSS, Epi-X, LRN, and the CDC Web. The PHIN will not replace any of these systems but will provide an “umbrella” to support the interoperability of existing CDC surveillance, communications, and reporting systems. State, territorial, and local public health agencies and various public health-related professional associations In fiscal year 2001, CDC implemented the NEDSS architecture project to replace or enhance the interoperability of its numerous existing surveillance systems.
NEDSS promotes the use of data and information standards to advance the development of efficient, integrated, and interoperable surveillance systems at the federal, state, and local levels. When completed, NEDSS will electronically integrate a wide variety of surveillance activities and will facilitate more accurate and timely reporting of disease information to CDC and state and local health departments. NEDSS is also designed to reduce provider burden in the provision of information and enhance both the timeliness and quality of information provided. The NEDSS architecture will include (1) data standards, (2) an Internet-based communications infrastructure built on industry standards, and (3) policy-level agreements on data access, sharing, burden reduction, and protection of confidentiality. National Environmental Public Health Tracking Network (NEPHTN) The NEPHTN is a collaborative effort between CDC and EPA to develop a national environmental tracking network that will (1) be standards-based; (2) allow direct electronic data reporting and linkage within and across health effect, exposure, and hazard data; and (3) be interoperable with other public health systems. Environmental public health tracking is the ongoing collection, integration, analysis, and interpretation of data about environmental hazards, exposure to environmental hazards, and health effects potentially related to exposure to environmental hazards. The goal of environmental public health tracking is to protect communities by providing information to federal, state, and local agencies. These agencies then use this information to plan, apply, and evaluate public health actions to prevent and control environmentally related diseases. Currently, no systems exist at the state or national levels to track many of the exposures and health effects that may be related to environmental hazards. The FACTS initiative establishes an agencywide, integrated information management and data-sharing resource.
It is intended to replace existing stovepipe application systems with a suite of components that can interact with each other and share data. FACTS is a technology suite composed of a centralized database that will (1) unite several smaller databases and projects that are interrelated and (2) provide a central point of access that will decrease data redundancy and inaccuracy. FACTS’ main purpose is to support the FSIS mission by substantially improving the ability to provide information that is accurate, complete, and timely for use by agency decision makers. Although this initiative will not consolidate all food safety information systems into one system, it will allow interoperability between systems in USDA agencies and at the U.S. Customs Service. In addition, FSIS and APHIS will take major steps toward establishing an integrated data-sharing effort that will specifically define the roles of each agency and will better safeguard the United States against foreign animal diseases and food safety hazards. Biological Defense Initiative (BDI) DTRA was executing the BDI program to determine the value of integrating systems with each other. This program was intended to deliver a national model for biological incident detection capabilities and to integrate and synthesize information from existing detectors and surveillance systems, such as BASIS, Portal Shield, RSVP, ESSENCE, and B-Safer. The intended partners in the BDI were to be CDC, Veterans Health Administration, NIH, USDA, and Interior’s Fish and Wildlife Service. However, the scope of the project was drastically narrowed as a result of funding reductions, from $215 million to $29 million. BDI has recently been cancelled. Navy, Army, DTRA, and civilian and academic partners EOS is a DTRA-supported initiative that leverages and tests existing and emerging biodefense technologies within a real-world testbed.
The objectives of the EOS project are to (1) develop a scalable biodefense system for early threat warning, rapid threat identification, focused disease treatment, and outbreak containment and (2) enable the use of emerging technologies for testing, verification, and validation in a real-world, testbed environment. EOS is currently used to identify epidemics of infectious respiratory disease among USAF basic military trainees. It is the first diagnostic platform using DNA-based microarray technologies to be tested, verified, and validated. Walter Reed Army Institute of Research, academic and commercial partners Bio-ALIRT is being developed by DARPA to scientifically determine the value of nontraditional data sources, such as human behavior, to enable the detection of a biological outbreak from artificial or natural causes up to two days earlier than with traditional means. The Bio-ALIRT program will continue to monitor nontraditional data sources, such as animal sentinels, behavioral indicators, and prediagnostic medical data, to determine which could effectively serve as early indicators of a biological pathogen release. Data sources and algorithms will be evaluated in testbeds. The knowledge and technology developed from the testbeds would be suitable for use in any syndromic surveillance system. Future plans for Bio-ALIRT include development of new techniques, such as advanced data fusion, detection, and privacy protection algorithms, to differentiate between naturally occurring and deliberate bio-releases. Program for Response Options and Technology Enhancements for Chemical/Biological Terrorism (PROTECT) PROTECT’s objective is to protect people in public facilities, such as subways and airports, from chemical attacks. It is intended to address vulnerabilities of civilians that were highlighted in the 1995 chemical agent attack in the Tokyo subway system.
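Aberration detection over prediagnostic time series of the sort Bio-ALIRT evaluates is often done against a smoothed baseline, such as an exponentially weighted moving average (EWMA), that alarms when a day's count far exceeds the baseline. This is a generic illustration with invented counts and parameters, not one of Bio-ALIRT's actual algorithms.

```python
import math

def ewma_alarms(counts, alpha=0.3, k=2.5):
    """Flag days where the count exceeds the prior EWMA baseline by
    more than k * sqrt(baseline), a rough Poisson-style bound.
    alpha and k are assumed tuning parameters."""
    baseline = counts[0]
    alarms = []
    for day, c in enumerate(counts[1:], start=1):
        if c > baseline + k * math.sqrt(max(baseline, 1.0)):
            alarms.append(day)
        baseline = alpha * c + (1 - alpha) * baseline
    return alarms

# Hypothetical daily clinic-visit counts for respiratory complaints
visits = [20, 22, 19, 21, 20, 45, 50]
alarm_days = ewma_alarms(visits)  # spike on days 5 and 6
```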
PROTECT rapidly detects the presence of a chemical agent and transmits readings to an emergency management information system. It demonstrates the use of integrated systems for the defense of infrastructure facilities. PROTECT does not currently have a bioagent use; however, it can provide a near-term solution for 24-by-7 facility monitoring for airborne biological agent releases. PROTECT is a DOE Domestic Demonstration and Application Program (i.e., a prototype system to address specific problems in order to improve infrastructure facility protection). The program takes advantage of recent advances in technology to prepare for and respond to attacks in subways, airports, and office buildings where people are concentrated. PROTECT is jointly funded by DOE and the Department of Justice. National Food Safety Laboratory System (NFSLS) USDA/APHIS, DOD/Army, selected state food laboratories The NFSLS is a newly initiated project to integrate systems for sharing information. It is currently a pilot program involving federal food laboratories at FSIS, FDA, the Army, and state food laboratories in Tennessee, Florida, New Hampshire, Massachusetts, and municipal food laboratories in Milwaukee, Wisconsin, and Cincinnati, Ohio. The program will also focus on the assurance of rapid sharing of reliable data through FDA’s eLEXNET system.
USDA and HHS will collaborate with federal, state, and local agencies to (1) provide a national seamless data exchange system for food laboratory information; (2) provide an infrastructure that is portable, intuitive, and ready to exchange data from state, local, and federal databases and varying internal network designs; (3) enhance communication and collaboration among food safety partnerships; (4) provide the ability to detect, compare, and communicate current findings in food laboratory analysis; and (5) demonstrate that multiple agencies engaged in food safety regulatory activities could leverage the resources necessary to achieve the common goal of reducing the incidence of microbial foodborne illness. The purpose of the National Infrastructure Project is to strengthen CDC’s infrastructure and network management in order to help ensure continuity of operations for the NCEH during emergencies. Its objectives are to achieve zero latency on all network operations and to provide redundancy and higher network uptime. The center is implementing cluster technology to help achieve redundancy without latency, thus increasing the reliability of the network. Storage area networks are being used to provide logical and physical disk drives to connected servers. Other commercial tools are used to monitor the network and detect problems before they occur. NCEH is also purchasing UPS paging to allow early detection of problems within the facility. For example, pagers will go off whenever water sensors or smoke detectors are activated. NCEH has a triage plan, which includes the use of e-mail, pagers, and phone calls combined with paging systems. FIRE is an initiative to develop an Internet-based research exchange system for laboratories and government agencies. It is intended to allow the sharing of biothreat information over a secure VPN.
It is anticipated that the system will be able to tie identified bioagent strains to particular organizations based upon previous identification of strains and their origins. Molecular Recognition-based Real Time Detection The Molecular Recognition-based Real Time Detection initiative is intended to develop new sensors for biological and chemical warfare agents. The work may provide more specific and sensitive sensors with very low or no false positives, which can be used to collect samples and provide data to information systems. Future plans include the development of single receptors for multiple bioagents or for a combination of biological and chemical agents. HL7 is an ANSI-accredited standards development organization that creates message format standards. Version 2.3 provides a protocol that enables the flow of data between systems. Version 3.0 is being developed through the use of a formalized methodology involving the creation of a Reference Information Model to encompass the ability not only to move data but also to use data once it is moved. LOINC is a set of code standards that identifies clinical questions, variables, and reports. It comprises a database of 15,000 variables with synonyms and cross-mappings; it covers a wide range of laboratory and clinical subject areas. The formal structure has six parts: component, property measured, time aspect, system, precision, and method. SNOMED is a nomenclature classification for indexing medical vocabulary, including signs, symptoms, diagnoses, and procedures; it defines code standards in a variety of clinical areas called coding axes. It can identify procedures and possible answers to clinical questions that are coded through LOINC. The National Library of Medicine developed UMLS as a standard health vocabulary that enables cross-referencing to other terminology and classification systems and includes a metathesaurus, a semantic network, and an information sources map.
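The HL7 v2.x messages described above carry data in pipe-delimited segments, with components separated by carets, and observation segments can carry LOINC-coded results. The tiny parser below is a sketch only: it ignores HL7 escape sequences and repetition separators, and the sample OBX segment and its values are invented for illustration.

```python
def parse_segment(segment):
    """Split an HL7 v2 segment into fields (|); fields containing the
    component separator (^) are split further into component lists."""
    fields = segment.split("|")
    return [f.split("^") if "^" in f else f for f in fields]

# Hypothetical observation segment carrying a LOINC-coded result
obx = "OBX|1|NM|600-7^Bacteria identified^LN||Bacillus anthracis|||A"
parsed = parse_segment(obx)
# parsed[0] is the segment name; parsed[3] is the coded observation
# identifier as [code, text, coding-system]
```

A production parser would honor the encoding characters declared in the message's MSH segment rather than hard-coding the separators.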
UMLS’s purpose is to help health professionals and researchers retrieve and integrate electronic biomedical information from a variety of sources, irrespective of the variations in the way similar concepts are expressed in different sources and classification systems. CIPHER’s objective is to establish standards for the data used in surveillance, to allow for a consistent definition and a consistent implementation across programs. The following objectives have been defined for CIPHER: (1) establish consistent definitions for information collected and used by surveillance systems; (2) define standards for how questions are to be formatted and information is to be collected on surveillance case report forms; (3) identify standards for the processing of data in electronic data entry systems, including value/label displays, reference table look-ups, and a minimum level of edit-checking; (4) identify storage standards; (5) provide guidance on electronic data interchange; and (6) provide guidance on coding for the display of data in statistical analyses and reports. government roles in dealing with public health emergencies, using a graphic to further illustrate the different roles. In this section, we have attempted to make a clear distinction between federal responsibilities and the responsibilities of other entities involved in responding to the release of a biological agent. 2. As we stated in our report, the Consolidated Health Informatics Initiative is an interagency work group led by HHS, which recently announced the first set of standards. While we are encouraged by the interagency coordination involved in this initiative, additional work is still needed: in defining activities for ensuring further coordination and consensus on the adoption and use of additional standards, in establishing milestones for defining and implementing all standards, and in creating a mechanism to monitor the implementation of these standards throughout the health care industry.
We recognize that the adoption of standards is an issue for the entire health care industry. 3. In response to these comments, we have added information on HHS’s cooperative agreements with states and local governments to the background section of the report. 4. We have included information we received about PHIN in appendix X. 5. We agree with HHS that IT is one of several components that support the core activities of public health surveillance; we discussed this in the Agency Comments and Our Evaluation section of the report. 6. While FoodNet may be a collaborative scientific activity for surveillance of foodborne diseases, it also includes an IT component for data exchange, which was reported to us by CDC officials. In addition to those named above, Larry E. Crosland, Neil J. Doherty, Amanda C. Gill, Pamlutricia Greenleaf, Joanne Fiorino, M. Saad Khan, Teresa F. Tucker, and Caroline C. Villanueva made key contributions to this report. The October 2001 anthrax attacks, the recent outbreak of the virulent Severe Acute Respiratory Syndrome (SARS), and increased awareness that terrorist groups may be capable of releasing life-threatening biological agents have prompted efforts to improve our nation's preparedness for, and response to, public health emergencies--including bioterrorism. GAO was asked, among other things, to identify federal agencies' information technology (IT) initiatives to support our nation's readiness to deal with bioterrorism. Specifically, we compiled an inventory of such activities, determined the range of these coordination activities with other agencies, and identified the use of health care standards in these efforts. The six key federal agencies involved in bioterrorism preparedness and response identified about 70 planned and operational information systems in several IT categories associated with supporting a public health emergency.
These encompass detection (systems that collect and identify potential biological agents from environmental samples), surveillance (systems that facilitate ongoing data collection, analysis, and interpretation of disease-related data), communications (systems that facilitate the secure and timely delivery of information to the relevant responders and decision makers), and supporting technologies (tools or systems that provide information for the other categories of systems). For example, the Centers for Disease Control and Prevention (CDC) is currently implementing its Health Alert Network, an early warning and response system intended to provide federal, state, and local agencies with better communications during public health emergencies, and the Department of Defense is using its Electronic Surveillance System for the Early Notification of Community-based Epidemics to support early identification of infectious disease outbreaks in the military by comparing analyses of data collected daily with historical trends. The extent of coordination or interaction of these systems among agencies covered a wide range--from an absence of coordination, to awareness among the agencies with no formal coordination, to formal coordination, to joint development of initiatives. IT can more effectively facilitate emergency response if standards are developed and implemented that allow systems to be interoperable. The need for common, agreed-upon standards is widely acknowledged in the health community, and activities to strengthen and increase the use of applicable standards are ongoing. For example, CDC has defined a public health information architecture, which identifies data, communication, and security standards needed to ensure the interoperability of related systems. 
Despite these ongoing efforts to address IT standards, many issues remain to be worked out, including coordinating the various standards-setting initiatives and monitoring the implementation of standards for health care delivery and public health. An underlying challenge for establishing and implementing such standards is the lack of an overall strategy guiding IT development and initiatives. Without such a strategy to address the development and implementation of standards, agencies may not be well positioned to take advantage of IT that could facilitate better preparation for and response to public health emergencies--including bioterrorism.
Various pieces of DOD guidance provide overall direction and require the services to define medical deployment standards to ensure that servicemembers deploying to a theater of operations are in optimal health. DOD allows the military services to deploy servicemembers who do not meet the services’ medical standards under certain conditions. For example, a service is required to obtain a waiver from the Combatant Command Surgeon if it wishes to deploy a servicemember who does not meet deployment standards but can receive medical treatment at deployed locations that will render him or her fit for duty. DOD guidance requires the services to continue to employ measures that ensure servicemembers are medically and psychologically fit for worldwide deployability, taking into account additional guidance provided by the combatant commander on theater-specific medical limitations. The Assistant Secretary of Defense for Health Affairs is planning to release new guidance that provides more specific guidelines on medical conditions that, in general, should preclude servicemembers from being deployed. Because DOD has not determined the issue date and has not yet implemented this new guidance, we were not able to evaluate its effect during our review. The Offices of the Surgeon General of each military service have established procedures to evaluate the health conditions of their servicemembers according to service-specific medical standards. Our prior work has shown that the Army, Air Force, Navy, and Marine Corps all have different methods of assessing their servicemembers’ medical readiness prior to deployment and documenting any medical conditions and limitations.
The Army’s guidance, similar to the other services’ guidance, gives the commander the ultimate authority to deploy servicemembers and make proper duty assignments, if certain procedures are followed, while taking into account the medical provider’s assessment of a servicemember’s medical condition and duty limitations. The Army Office of the Surgeon General and Army Deputy Chief of Staff (G-1) provide guidance on soldiers’ medical readiness. Regarding medical matters, the Army Office of the Surgeon General heads the Army Medical Command, which provides guidance to Army medical treatment facilities. Medical Evaluation Boards (MEB) are conducted at medical treatment facilities at Army installations. Regarding command matters, the Army Manpower and Reserve Affairs Office works with the Army Deputy Chief of Staff G-1 to provide guidance to human resource directorates at each installation. The Deputy Chief of Staff G-1 has overall responsibility for the Physical Performance Evaluation System, which involves an administrative screening board known as the Military Occupational Specialty Medical Retention Board (MMRB). Army Regulation 40-501 requires that the Army document physical and mental conditions that may limit a soldier’s ability to perform his or her duties on the physical profile form. Using the physical profile, Army medical providers, who serve as profiling officers, provide recommendations on a soldier’s medical limitations in order to assist the commander in properly assigning the soldier to duties that contribute to the unit’s mission. A profiling officer creates a physical profile that documents any limitations found during a medical examination and identifies whether the medical limitation is temporary (a short-term condition that can be improved by further treatment) or permanent (a chronic condition that will not improve with medical treatment at that point in time).
The profiling officer classifies the medical limitations under six categories: physical capacity or stamina, upper extremities, lower extremities, hearing and ears, eyes, and psychiatric. These categories are often abbreviated as the “PULHES” factors (see app. III for further detail). The medical limitations in physical profiles are also given a numerical designation from 1 to 4 to reflect the different levels of functional capability and severity of impairment. Soldiers with physical profiles designated by the number 1 are considered to have a high level of medical fitness; a 2 indicates that a soldier has some medical condition or physical defect that may require some activity limitations; a 3 under one or more of the factors indicates that the soldier has a medical condition or physical defect that may require significant limitations in duty assignment; and soldiers designated by the number 4 must have their military duties drastically limited. Profiling officers must also specify whether the soldier can perform certain functional activities comprising the minimum requirements needed in order to be medically qualified for worldwide deployment. Profiling officers should evaluate a soldier who has a temporary profile at least once every 3 months to determine whether the soldier’s medical condition has improved or, if not, whether an extension of up to 12 months is needed. If an extension is needed beyond 12 months, a temporary profile should be changed to a permanent profile. Permanent and temporary profiles normally require the signature of only the profiling officer. The signatures of both the profiling officer and a higher-level medical provider, who is designated the approving authority, are required when a permanent profile number is designated at 3 or 4, or when a permanent profile designation has been changed from a 3 to a 2.
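As an illustrative aid only (not an Army system; the class and function names are our own), the designation and signature rules above can be sketched in code:

```python
# Illustrative sketch of the physical-profile rules described above.
# Not an Army system; all names here are hypothetical.
from dataclasses import dataclass

PULHES_FACTORS = ("P", "U", "L", "H", "E", "S")

@dataclass
class PhysicalProfile:
    factors: dict    # each PULHES factor -> numerical designation 1..4
    permanent: bool  # False = temporary profile

    def serial(self):
        """The profile's highest (most limiting) numerical designation."""
        return max(self.factors.values())

def needs_approving_authority(profile, prior_serial=None):
    """A second, higher-level signature is required for a permanent profile
    designated 3 or 4, or when a permanent designation is changed from a 3
    to a 2; otherwise only the profiling officer signs."""
    if not profile.permanent:
        return False
    if profile.serial() >= 3:
        return True
    return prior_serial == 3 and profile.serial() == 2
```

A usage note: `serial()` follows the common convention that the profile's overall designation is its worst factor, which is an assumption on our part rather than something stated in the report.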
According to profiling officers, during the preparation of the physical profile and medical evaluation of the soldier, the profiling officer may communicate with the soldier’s commander for the purpose of better identifying the soldier’s medical limitations. All permanent physical profiles are coded to designate any assignment limitations, including whether a soldier has been reviewed by an MMRB or a Physical Evaluation Board. Once the physical profile is signed by the profiling officer, and approved by the designated approving authority as needed, Army regulation 40-501 requires that the completed physical profile be retained in the soldier’s medical record and that copies of it be distributed to the unit commander and the soldier. For permanent physical profiles, an additional copy is distributed to the military personnel office. Army medical records comprise both hard copy documents and an electronic system called the Armed Forces Health Longitudinal Technology Application (AHLTA), the official system for retaining soldiers’ medical documentation. AHLTA is used DOD-wide and gives medical providers access to soldiers’ medical information, including medical evaluation history, prescriptions, diagnostic tests, and physical profile information. The Army also tracks soldiers’ medical readiness information through the Army Medical Protection System (MEDPROS), in order to allow commanders access to soldiers’ medical information that might affect readiness, but this system retains limited information only on permanent physical profiles and does not supply any detailed description of medical limitations or incapacity to perform functional activities. Because physical profiles merely represent medical recommendations made by the profiling officer to a soldier’s commander, physical profile designations do not automatically determine whether a soldier is deployable.
Three Army regulations require higher levels of review for soldiers with a numerical designation of at least a 3 in order to assist commanders in properly assigning soldiers to duties suitable to their medical limitations. Army guidance states that once soldiers receive a permanent profile designation of at least a 3, they are not deployable until the MMRB or MEB is concluded. If a soldier receives a permanent profile of at least a 3, the profiling officer and approving authority must provide an initial determination of whether the soldier meets Army medical standards. If they believe that a soldier meets medical standards, Army regulation 600-60 requires that the soldier be reviewed by an MMRB to determine whether the soldier is able to complete the duties in his or her job assignment or needs to be reassigned to a job that accommodates his or her limitations. The MMRB consists of five voting members, including a medical provider, a senior commander, and, when reasonably available, soldiers of the same branch or specialty as the soldier being evaluated, as well as non-voting members, including a personnel advisor, a recorder, and anyone else needed to ensure a fair hearing. Once the personnel office receives the permanent profile from the medical administrative office and convenes an MMRB, the recorder will assemble the soldier’s personnel records and medical records. The commander will prepare an evaluation of the impact of the profile limitations on the soldier’s ability to perform the full range of duties in the soldier’s job assignment, known as a Military Occupational Specialty (MOS). During the MMRB, the personnel advisor will summarize the details of the soldier’s current MOS and common duties, and the medical provider will brief the MMRB on the soldier’s physical profile. The soldier may also present facts or call witnesses relevant to his or her physical performance, current MOS retention, or MOS reclassification preference.
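The referral rule above (a permanent designation of at least 3 goes to an MMRB if the soldier is believed to meet medical retention standards, and to an MEB otherwise) reduces to a simple decision function. The sketch below is illustrative only; the function name is our own:

```python
def board_referral(serial, permanent, meets_medical_standards):
    """Return which review board, if any, a new physical profile triggers.

    Illustrative sketch of the referral rule described above: a permanent
    designation of at least 3 is routed to an MMRB when the soldier is
    believed to meet Army medical retention standards, and to an MEB when
    the soldier is believed not to meet them. Other profiles trigger no
    board review.
    """
    if not (permanent and serial >= 3):
        return None
    return "MMRB" if meets_medical_standards else "MEB"
```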
The MMRB can recommend that (1) the soldier remain in the Army under his or her current military occupational specialty or specialty code, (2) the soldier be placed in probationary status for up to 6 months to improve the condition of a disease or injury, (3) the soldier be reclassified into another occupational specialty, or (4) the soldier be referred to the MEB for medical disqualification processing. Active component Army soldiers should appear before an MMRB within 60 days of the date the physical profile is signed by the medical provider who is designated the approving authority. Army regulation 600-60 requires that personnel officials responsible for convening the MMRB maintain statistics on each case in order to assess whether or not MMRB evaluations are convened within the 60-day time limit. As of March 2008, officials are required to report the statistics to the Deputy Chief of Staff of the Army. Alternatively, if a profiling officer and the approving authority believe that a soldier with a permanent profile designation of at least a 3 does not meet medical standards, Army regulation 40-501 requires that the soldier be reviewed by an MEB to fully ascertain the soldier’s medical condition and limitations. From the MEB results, a subsequent Physical Evaluation Board determines whether the soldier is to be retained in the Army and, if not, the applicable disability rating. There are two ways in which an MEB is initiated: by referral from the medical provider designated as the approving authority or by referral from an MMRB. When an MEB is referred by an approving authority, the soldier’s physical profile is distributed to the Physical Evaluation Board liaison officer at the medical treatment facility, who is responsible for the case management of the soldier. A medical provider reexamines the servicemember and reviews his or her medical history, including prior test results, diagnoses, and treatments.
The medical provider will then complete a narrative summary to document the nature and degree of severity of the soldier’s condition. The commander also provides a letter describing how the soldier’s medical condition affects job performance and deployability status. Also provided is a summary of the soldier’s chief complaint, stated in the soldier’s own words. MEBs are composed of two or more physicians, one being a senior medical provider with detailed knowledge of Army medical standards and procedures, and the other members having familiarity with these matters. MEB evaluations must be completed within 90 days of approval of the physical profile, or of the date when the MMRB referral is received by the liaison officer. The MEB can result in several outcomes, including: (1) the soldier is returned to duty, with a profile marked that he or she meets medical retention standards; or (2) the soldier is referred to a Physical Evaluation Board, which determines whether he or she has lost the ability to perform assigned duties because of a medical condition and thus is unfit for duty, or is fit for duty and thus is retained in the Army. An Army memorandum requires that the liaison officers track certain statistics and use an electronic database system to ensure that MEB evaluations are completed within 90 days. This information is reported quarterly to the Deputy Under Secretary of Defense for Military Personnel Policy. According to a DOD instruction, within 60 days prior to deployment, soldiers complete a pre-deployment health assessment form to reflect their medical readiness with respect to immunizations, dental care, hearing and eye exams, and medical limitations on physical profiles. If a soldier indicates on the pre-deployment health assessment form that he or she is on a profile, or light duty, or undergoing a medical board, the soldier is referred to a medical care provider for reevaluation and verification of the medical limitations under the physical profile.
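The two time limits discussed above (60 days for an MMRB after the profile is signed, and 90 days for an MEB after profile approval or receipt of the MMRB referral) lend themselves to the kind of timeliness tracking the guidance requires. A minimal sketch, with hypothetical function names:

```python
# Illustrative timeliness check for the board time limits described above.
from datetime import date

# Days allowed between profile approval/referral and the board evaluation.
TIME_LIMITS = {"MMRB": 60, "MEB": 90}

def days_elapsed(start, board_date):
    """Calendar days between the triggering date and the board evaluation."""
    return (board_date - start).days

def is_timely(board, start, board_date):
    """True if the board evaluation occurred within its prescribed limit."""
    return days_elapsed(start, board_date) <= TIME_LIMITS[board]
```

Summarizing `days_elapsed` across cases is exactly the statistic the report notes was kept informally but never rolled up at the installations visited.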
If a soldier does not meet the medical requirements under the pre-deployment health assessment, the soldier is classified as not deployable until the soldier receives further treatment. Moreover, if a soldier is also undergoing an MMRB or MEB, the soldier is considered not deployable until the evaluation is completed and the soldier is found fit for duty. The pre-deployment health assessment is updated to indicate that the soldier is deployable once he or she receives treatment or undergoes a board screening and is found fit for duty. Under Army regulation 40-501, Army commanders have the ultimate authority to deploy soldiers, but commanders are required to recognize soldiers’ limiting conditions and assign them duties consistent with those conditions, with the assistance of personnel management officers from Army Forces Command and Human Resources Command. The Army allows commanders to deploy soldiers who have medical conditions that may require significant limitations in duty assignment as long as the requirements in the guidance are met, including board evaluations, suitable duty assignments, and, if needed, available medical treatment in deployed locations; however, the Army is not meeting all requirements to ensure board evaluations are conducted within prescribed time frames, and various problems exist with regard to physical profile record keeping. Army requirements for deploying soldiers with medical conditions are not always being met; commanders are not always aware of medical limitations in a timely way, and in our sample review, we found that commanders are not always adhering to guidance to ensure that soldiers are not deployed to Iraq or Afghanistan prior to having needed MMRB or, in some cases, MEB evaluations. Furthermore, the Army continues to have problems with retention and completeness of its physical profiles, as well as a lack of consistency in designations with regard to soldiers’ abilities to perform functional activities.
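The deployability rule at the start of this passage (a soldier pending an MMRB or MEB is not deployable until found fit for duty; otherwise deployability follows the pre-deployment health assessment) can be expressed as a short check. This is a sketch under those two stated conditions only, with names of our own choosing:

```python
def is_deployable(meets_predeployment_requirements, pending_board,
                  found_fit_for_duty=False):
    """Sketch of the deployability rule described above (illustrative).

    A soldier undergoing an MMRB or MEB is not deployable until the board
    concludes and finds him or her fit for duty; otherwise deployability
    depends on meeting the pre-deployment health assessment requirements.
    """
    if pending_board and not found_fit_for_duty:
        return False
    return meets_predeployment_requirements
```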
While we did not find widespread revision of profiles prior to deployment, we found that soldiers were concerned about how the Army was addressing their medical problems prior to deployment. While commanders may recognize medical limitations on a case-by-case basis, without performing required medical board evaluations, the Army lacks a method for ensuring that all such cases are appropriately recognized. While Army guidance allows commanders to deploy soldiers with medical conditions that may require significant limitations in duty assignments, subject to certain requirements, we found that commanders are not always aware of soldiers’ medical limitations when making deployment decisions, and they do not always adhere to these requirements. Army guidance requires that whenever a new physical profile is created, copies of the physical profile documentation, once authorized by the approving medical authority, be added to the soldier’s medical record and given to the soldier, his or her commander, and the command’s personnel office. Army guidance stipulates that soldiers with a permanent profile containing a numerical designation of a 3 or 4 who meet Army medical retention standards should be evaluated by an MMRB within 60 days of receiving the approved physical profile, to determine whether the soldier is able to complete all the duties in his or her current job assignment or should alternatively be reassigned to a job that accommodates his or her medical limitation(s). Alternatively, a soldier with a permanent profile of a 3 or 4 who is believed by a profiling officer not to meet medical standards must be evaluated by an MEB within 90 days to determine whether that soldier should be retained in the Army. Moreover, within 60 days prior to deployment, DOD guidance requires the Army to review soldiers for medical readiness.
During this pre-deployment assessment, soldiers who report having a physical profile must be referred to a medical provider, which, according to medical providers, may result in confirmation or updating of their numerical designation. If a soldier receives a new profile indicating a medical condition that may require significant limitations in assignment, Army guidance categorizes the soldier as not deployable until he or she is reviewed by an MMRB or, in some cases, an MEB. Commanders, with the assistance of personnel management officers, are responsible for determining proper duty assignments for soldiers based on their knowledge of soldiers’ physical profiles, assignment limitations, and the need for accomplishing necessary duties within the soldiers’ MOS. Commanders may also consider the availability of medical treatment at deployed locations when determining the deployability of soldiers with physical profiles. At Forts Benning, Stewart, and Drum, we found that commanders are not always adhering to requirements in Army guidance to ensure that needed board evaluations are performed. After reviewing 685 medical records and the deployment information of soldiers who were preparing for deployment in the statistically valid sample, we estimate that 6 percent of soldiers from Forts Benning, Stewart, and Drum were deployed with designations of permanent 3 in their physical profiles—signifying to a commander that they have medical conditions that may require significant limitations. These soldiers should have been reviewed prior to deployment by an MMRB or, as needed, an MEB, in accordance with Army regulations. Further, we estimate that about 3 percent of the soldiers from Forts Benning, Stewart, and Drum had profiles that indicated that they met medical retention standards and required an MMRB, or might not meet standards and required an MEB, but were deployed without having been reviewed by an MMRB or MEB.
Figure 1 summarizes percentages (and confidence intervals) of soldiers with profile designations of permanent 3 who deployed from Forts Benning, Stewart, and Drum, and the percentage of those soldiers who did not receive evaluation by an MMRB or MEB prior to deployment. In our sample, we found that of the 42 soldiers who had profile designations of permanent 3, 17 soldiers did not receive needed board evaluations prior to their deployment. Although we could project this as a percentage of the soldiers from Forts Benning, Stewart, and Drum, we did not project this as a percentage of the 42 soldiers who had profile designations of a permanent 3, because the size of this subgroup in the sample is not sufficient to report a reliable confidence interval for a population estimate. Table 1 shows the number of soldiers in the sample with permanent physical designations of 3 who did not receive pre-deployment evaluations by an MMRB or MEB. These needed evaluations may not be occurring because each of the three installations lacked an enforcement mechanism to ensure all procedures are followed. According to medical providers, commanders, and personnel officials, in some cases soldiers do not receive their MMRB or MEB evaluations because profiles were not distributed by the approving authority or medical administrative office in time to inform commanders of the existence of the profiles. In other cases, according to personnel officials, commanders were given notice of the profiles but did not take the needed action on time; we were not able to determine why this occurred. Moreover, we found that while Army personnel officials at the three installations we visited were maintaining proper data on MEB evaluations, they were not maintaining required statistics on the performance of MMRB evaluations. Army guidance requires that medical and personnel officials maintain certain statistics in order to know whether MEB or MMRB evaluations are conducted within set time frames.
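The percentages above are weighted projections from a stratified sample, so raw sample counts do not reproduce them exactly. As a rough illustration of how a proportion and a 95 percent confidence interval can be computed, the sketch below applies a simple normal-approximation (Wald) interval to the raw counts; this is not necessarily the estimation method GAO used:

```python
# Illustrative only: Wald interval for a simple random sample, which is
# an assumption on our part and may differ from GAO's weighted estimates.
import math

def proportion_with_ci(successes, n, z=1.96):
    """Point estimate and approximate 95% confidence interval."""
    p = successes / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# 42 of the 685 sampled records carried a permanent-3 designation,
# roughly the 6 percent figure reported above:
p, lower, upper = proportion_with_ci(42, 685)
```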
Personnel officials told us that they kept informal data on each MMRB case in separate files, such as the date of the approved profile, the date it was received, and the date of the MMRB. However, this information was not summarized in the way that would be needed to calculate the time that elapsed between the stages of MMRB evaluations. Prior to February 2008, the Army did not require that these statistics be reported to anyone. The Army revised its regulation 600-60 to require the reporting of quarterly statistics to the Deputy Chief of Staff of the Army beginning in March 2008. That change may lead to better oversight of the timeliness of the MMRB, but we were not able to assess its impact during this review. Without performing all required medical board evaluations or tracking the timeliness of board evaluations, the Army lacks a systematic method for confirming that commanders recognize all cases of medical limitations and assign soldiers to duty assignments that suitably accommodate them. Medical records are intended to provide a soldier’s history of medical treatment and limitations, and Army regulation 40-501 requires that once physical profiles are prepared and signed, the profiles be kept in the soldier’s medical record. These completed profiles include the numerical designation, a description of medical limitations, the signature of the profiling officer and approving authority, as needed, and the dates of the signatures. Medical records comprise both the hard copy and electronic versions of medical information. Commanders use physical profiles to assess soldiers’ physical ability to perform their duties.
When we compared records in the official electronic medical system, AHLTA, and hard copy records with those in an electronic medical readiness system, MEDPROS, we found that 213 physical profiles were missing from the 685 medical records in our sample of soldiers at Forts Benning, Stewart, and Drum who had a medical condition that may require significant limitations. Further, of the physical profiles that were retained in this sample of medical records, we found that 20 were not complete. Specifically, both hard copy and electronic medical records lacked profiles with the appropriate signatures and dates of final approval. These problems may be occurring because each installation uses its own informal process for approving and distributing completed physical profiles to the soldier, commander, and medical record. For example, at Forts Benning and Stewart, a profiling officer would consult with the soldier and his or her commander in creating the profile, and if the physical profile were permanent and designated a 3 or 4, the medical provider who created the profile would forward it to the approving medical provider. The approving medical provider would then provide it to personnel officials in order to initiate an MMRB, or to the liaison officer to initiate an MEB if needed, and would also provide it to the medical administrative office to be retained in the medical record. Officials did not strictly adhere to time frames during this process, and personnel officials expressed doubt to us as to whether they received all physical profiles. Medical and command officials at Fort Drum stated that their process was also informal and that they did not strictly adhere to time frames, but they retained hard copies of all permanent physical profiles separate from the soldiers’ medical records at the liaison officer’s administrative office.
Without a systematic method for approving and distributing profiles, current informal processes have led to inconsistencies in retention of the physical profiles in the medical record. The electronic personnel system also contains medical information, and we found that it is not being routinely updated. As a result, communication to commanders about physical limitations in many cases comes from the soldiers themselves, rather than the medical record system or personnel system. Army officials intend to require that all physical profiles be processed and retained in the AHLTA electronic medical system; however, steps have not been taken to implement the system change. The system change will require that physical profiles be approved and routed electronically to commanders, medical providers, and the personnel offices to initiate MEB and MMRB proceedings. This change is intended to correct the limited visibility over profile information and inconsistencies in profile procedures, similar to the issues we have found in this review at Forts Benning, Stewart, and Drum. However, Army officials told us they have not finalized plans for actions needed and associated milestones to implement these changes. Moreover, current plans do not ensure that the information will be entered and distributed in a timely manner, as officials who convene the MMRB or MEB do not have authority to compel timely system input by commanders and medical providers. Finally, the Army is not consistent in its use of numerical designations in profiles to reflect a soldier’s ability to perform certain functional activities. Army guidance states that when soldiers are not able to meet certain requirements they are given a numerical designation of at least 3, and this designation should result, in most instances, in a review of their cases by an MMRB or MEB. 
When profiling officers prepare physical profiles carrying a designation of 2, these profiles do not generally receive further review until the soldier indicates at the pre-deployment assessment that he or she is under a physical profile. Based on our random, projectable sample of soldiers preparing to deploy between April 2006 and March 2007, we estimate that about 7 percent of the soldiers who were preparing for deployment at Forts Benning, Stewart, and Drum had physical profiles in their medical record showing the inability to perform functional activities yet were not designated with a score of at least 3. Figure 2 shows the estimated percentage (and confidence intervals) of soldiers by Army installation who had profiles that indicated that they were unable to perform certain functional activities, yet whose profiles had a designation of 2. The physical profile form defines performance of functional activities according to whether the soldier is: (1) able to carry and fire his or her individually assigned weapon; (2) able to move a fighting load of 48 pounds for at least 2 miles; (3) able to wear his or her protective mask and all chemical defense equipment; (4) able to construct an individual fighting position; (5) able to perform 3-5 second rushes under direct or indirect fire; and (6) healthy, without any medical condition that prevents deployment. Army regulation 40-501 allows the medical provider some flexibility in the choice of numerical designation in a soldier’s profile, and according to medical providers, they may upgrade designations based on their knowledge of the soldier’s medical condition and the soldier’s capacity to handle medical limitations. However, discretionary upgrades can mask a soldier’s limitations such that a commander might deploy the soldier without benefit of MMRB evaluation and may place the soldier in duties unsuitable to his or her limitations.
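The inconsistency estimated above at about 7 percent (a designation below 3 despite a recorded inability to perform one or more functional activities) can be expressed as a simple record check. The activity labels below are our own shorthand for the six items on the profile form, not official field names:

```python
# Shorthand labels (ours, not official) for the six functional activities
# on the physical profile form described above.
FUNCTIONAL_ACTIVITIES = (
    "carry_and_fire_assigned_weapon",
    "move_48_pound_fighting_load_2_miles",
    "wear_protective_mask_and_chemical_gear",
    "construct_individual_fighting_position",
    "perform_3_to_5_second_rushes",
    "no_condition_preventing_deployment",
)

def inconsistent_designation(serial, can_perform):
    """Flag a profile whose numerical designation is below 3 even though
    one or more functional activities is marked as 'unable' (illustrative).
    Activities missing from the record are assumed performable."""
    unable = [a for a in FUNCTIONAL_ACTIVITIES if not can_perform.get(a, True)]
    return serial < 3 and bool(unable)
```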
We did not find widespread revision of profiles by profiling officers or approving authorities prior to deployment. Only 1 percent of the physical profiles we reviewed were changed from a permanent 3 to a 2 within a few months prior to the soldier’s deployment. Upgrades in numerical designations are generally annotated by remarks in the descriptive text included in a soldier’s profile, and they must include a second approving medical provider’s signature. However, informal discussions between soldier and medical provider can result in a change in the profile designation that may not be noted in the profile. In one case, we found that a soldier’s profile was changed from a 3 to a 2 without meaningful annotation and lacking the requisite second approving signature. This soldier reported to us that she had not undergone a new medical diagnosis prior to the profile upgrade; however, she also had told her medical provider that she did not want to go through an MMRB or MEB and thereby risk being removed from the Army. According to Army officials, soldiers’ medical conditions may improve for various reasons, such as undergoing surgery or additional physical therapy. Although we found no evidence of widespread revision of numerical designations, in our surveys of deployed soldiers and our interviews with Army personnel officials and family members of deployed soldiers, some soldiers or family members expressed concerns to us that they were uninformed about how the Army was addressing their medical problems prior to deployment, and they knew of no venue to resolve their complaints. In our surveys, two additional soldiers also stated that they did not feel they had been correctly graded in their physical profile designations, but were reluctant to discuss the matter with their commanding officers for fear of prejudicial treatment. One soldier stated that her physical profile had been changed without further physical examination.
The other soldier noted that her physical profile designation was upgraded even though a medical provider had added more limitations after examining her, and she did not agree that the profile expressed all the limitations caused by her back, knee, and shoulder ailments. We reviewed the documentation in the physical profiles of these soldiers, and the profiles contained the requisite approving signatures, dates, and descriptions of limitations. However, our analysis did not evaluate the medical providers' diagnoses of the medical conditions, because we are not qualified to evaluate the providers' medical judgment. Moreover, we would not be able to determine from the documentation whether the soldier agreed with the profile, whether the profile was changed without further physical examination, or whether the medical provider or the soldier fully communicated all of the issues involved. Army personnel officials told us that they were unable to assist soldiers bringing complaints about not being evaluated by a medical board when the soldiers received a new permanent profile prior to their deployment, because the officials do not have access to soldiers' medical information and do not have the authority to enforce time frames. These officials had also been contacted by soldiers' family members who were concerned that the soldiers would be deployed and their conditions would worsen at deployed locations. An Army personnel official told us that soldiers sometimes questioned whether they were to be evaluated by a board prior to deployment, but by the time this official received the physical profile to initiate an MMRB, the soldiers had already been deployed. Because the officials do not have access to all medical information, they would not be able to verify whether soldiers' profiles were approved. These situations may be occurring because physical profiles are not being distributed in a timely manner. 
Also, because Army personnel officials do not have the authority to enforce time frames, they could not compel commanders to provide timely input for the approval of the profile or compel designated approving authorities to distribute the approved profiles. Thus, although Army personnel officials may believe that physical profiles are not being delivered in a timely manner, they do not have the ability to resolve these soldiers’ complaints. Issues regarding proper medical evaluation of soldiers prior to deployment could be resolved by having a designated point of contact to whom soldiers and family members can bring their concerns. Such a point person would require access to the soldier’s medical information and the ability to resolve any problems and questions about a soldier’s medical readiness. This person would also need to work independently of the operations commander in order to prevent bias or coercion by the commander in resolving soldier issues. In September 2007, the Army Medical Command created a program to designate an ombudsman, or point of contact, available for each installation to whom soldiers can bring concerns on issues such as health care, pay, physical disability processing, and transition to the Veterans Administration. The Army memorandum establishing this program states that ombudsmen will resolve complaints, assist in obtaining accurate information, and act as advocates specifically for soldiers assigned to the Warrior Transition Unit and their families. According to ombudsmen at Forts Benning, Stewart, and Drum, they may also provide support for any soldier or family member of a soldier who needs assistance, through walk- ins or through the Army Wounded Soldier and Family Hotline. 
In accordance with the memorandum, the ombudsman will be independent from commanders at the installation and will work closely with the Medical Assistance Group, which is part of the Army Medical Command under the Army Surgeon General's leadership at Fort Sam Houston, Texas. However, the ombudsman program is not broadly publicized as a resource for active duty soldiers with medical conditions or their family members. We were not able to fully evaluate how effectively the ombudsman program would resolve the issues brought by deploying soldiers, as opposed to soldiers in the Warrior Transition Unit and their family members, because the program had only recently been implemented and was not fully in place at the time of our review at Forts Benning, Stewart, and Drum. Ensuring that soldiers who are not part of the Warrior Transition Unit, and their family members, are aware of and have access to the ombudsman program may help to alleviate some of the concerns brought forth by deploying soldiers. As a result of the various medical record deficiencies and discretionary profile revisions discussed, commanders' visibility over their soldiers' potential medical conditions cannot be ensured. Furthermore, without a well-publicized ombudsman program, soldiers preparing for deployment cannot be assured of having the opportunity to air and resolve their medical concerns. Based on our review of medical records from Forts Benning, Stewart, and Drum, we estimate that about 10 percent of active duty soldiers with profiles indicating medical conditions that could require significant limitations in duty assignments were deployed to Iraq and Afghanistan. Although Army guidance allows for the deployment of soldiers with medical conditions, it requires commanders to assign soldiers to duties that are suitable to their limitations. 
Because of the low response rate to our survey, we were unable to determine the extent to which these soldiers were in fact assigned duties suitable to their medical conditions. In the limited responses to our survey and in interviews with soldiers, most soldiers reported that they were able to accomplish most of their duties, although they were sometimes required to perform duties exceeding their medical limitations. We reviewed 685 medical records taken from a random projectable sample of active component soldiers who were preparing for deployment between April 2006 and March 2007 from Forts Benning and Stewart, in Georgia, and Fort Drum, in New York. From these installations, we estimate that 86 percent of soldiers did not have profiles indicating medical conditions that could require significant limitations in duty assignments. We estimate that 14 percent of soldiers preparing to deploy from Forts Benning, Stewart, and Drum had profiles indicating conditions that could require significant limitations: specifically, soldiers with physical profile designations of 3 or 4, or who indicated that they could not perform certain functional activities. Figure 3 shows the total number of records reviewed and the estimated percentage (and confidence intervals) of soldiers, by installation, who had medical impairments that could require significant limitations at Forts Benning, Stewart, and Drum. As shown in figure 4, of the estimated 14 percent of soldiers preparing to deploy from Forts Benning, Stewart, and Drum who had medical conditions that could require significant limitations in duty assignment, approximately two-thirds, or about an estimated 10 percent of the total number of soldiers, were deployed to Iraq or Afghanistan. These soldiers with medical conditions included soldiers having a physical profile designation of at least a 3, or indicating that they could not perform certain functional activities. 
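The screening rule just described (a physical profile designation of 3 or 4, or an indicated inability to perform certain functional activities) can be sketched as a simple check. The sketch below is purely illustrative: the record structure and field names are our own assumptions for exposition, not an Army or GAO data format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProfileRecord:
    # Hypothetical representation of a DA Form 3349 physical profile.
    pulhes: Dict[str, int]  # e.g., {"P": 1, "U": 2, "L": 3, "H": 1, "E": 1, "S": 1}
    functional_activities: List[bool] = field(default_factory=list)  # True = able to perform

def may_require_significant_limitations(rec: ProfileRecord) -> bool:
    """Flag a profile if any PULHES designator is 3 or 4, or if any
    functional activity (carry weapon, move fighting load, etc.) is
    marked as one the soldier cannot perform."""
    if any(score >= 3 for score in rec.pulhes.values()):
        return True
    return not all(rec.functional_activities)

# A designation-2 profile that nonetheless shows an inability to perform
# a functional activity would be flagged:
rec = ProfileRecord(pulhes={"P": 1, "U": 2, "L": 2, "H": 1, "E": 1, "S": 1},
                    functional_activities=[True, False, True, True, True, True])
print(may_require_significant_limitations(rec))  # True
```

The point of the second branch is the finding discussed above: a profile can carry only 2s and still indicate limitations, so checking the numerical designators alone would miss those records.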
The remaining estimated 4 percent of soldiers with medical conditions that could require significant limitations did not deploy. Soldiers in the sample who deployed with medical conditions that could require significant limitations had conditions such as herniated discs, back pain, chronic knee pain, type 2 diabetes, or mild asthma. A soldier might have a physical profile that indicates multiple medical limitations that fall under different categories. Table 2 shows that of the 66 deployed soldiers who had medical conditions that could require significant limitations, 55 percent deployed with defects of the lower extremities (under the “L” category). For example, one soldier’s physical profile showed chronic hip pain that restricted physical training pace and limited the soldier to lifting no more than 48 pounds. Medical conditions of the eyes and psychiatric conditions had the lowest rates of occurrence. While we did not review documentation of medical limitations other than the soldiers’ physical profiles, according to Army medical officials, mental health conditions are not generally documented in physical profiles unless the conditions limited a soldier’s ability to accomplish his or her duty. Commanders were also notified of a soldier’s mental condition by medical providers if commanders requested the mental health evaluation of the soldier. We were unable to determine the extent to which deployed soldiers in the sample with medical conditions were assigned duties suitable to their limitations. While Army guidance requires commanders to assign soldiers to duties that are suitable to their medical conditions, it does not require that they track the assignments of their soldiers to duties that accommodate their limitations. In order to determine the extent to which they had been assigned to duties suitable for those conditions, we surveyed by e-mail a sample of deployed soldiers with medical conditions. 
In our survey, we asked these soldiers for information on their ability to perform the duties to which they were assigned. However, we did not get a sufficiently high response rate to enable us to project findings from the survey respondents. We sent the survey to 66 soldiers but received responses from only 24. Of the 24 soldiers who responded, 19 reported that they were able to complete most or all of their duties, and 22 of the 24 said they wanted to deploy with their units. None said that they could perform only a few or none of their duties. However, 5 of the soldiers we surveyed indicated that they were able to perform only some of their duties. Survey responses indicated that some soldiers had experienced job reassignments to accommodate the limitations of their medical conditions. For example, one soldier had a shoulder injury that limited his ability to wear all of his body armor. When his unit was deployed to Iraq, he was assigned to duties in Kuwait so that he would not have to wear all of his body armor. Another soldier with a hearing deficit had his occupational category changed from infantry to supply specialist to protect him from exposure to loud noise. One soldier had degenerative disc disease, with lower back and leg pain, and his commander reassigned him from being leader of his unit to base security, accommodating his medical condition by limiting the time he had to wear his equipment. However, three of our survey respondents reported that their duties or occupational categories were not changed, although they believed they should have been. For example, one soldier often fell asleep during guard duty because the electric power needed to run his continuous positive airway pressure machine was not reliably available, impairing his sleep apnea treatment. 
Although we were unable to speak with the commanders of the particular soldiers surveyed in the sample, we spoke with other commanders at Forts Benning, Stewart, and Drum to help explain these survey responses. These commanders reported that they were aware of the medical conditions of the soldiers with whom they had deployed and that they always considered these conditions in their duty assignments. Army commanders told us that soldiers with medical impairments may on occasion be required to perform job duties exceeding their limitations because they have special skills that are hard to replace using other personnel. Commanders may also sometimes assign soldiers to duties exceeding their limitations because they are unaware of the extent of the limitations, as soldiers' physical profiles may not reflect all of their medical information. Furthermore, according to both soldiers and senior medical officials whom we interviewed, soldiers may conceal the extent of their medical limitations or may negotiate with medical providers in order to remain with their units or in the Army. For example, one soldier did not agree with the upgrading of her physical profile designation, but also did not want to fully disclose her medical condition for fear of not meeting Army medical standards. Two soldiers stated that they agreed with their physical profile designations, which masked the severity of their limitations, and they were deployed although their medical conditions were progressively worsening at deployed locations. In both cases, the soldiers stated that they were nearing retirement and did not want to be discharged from the Army due to a medical board evaluation before they were eligible to receive their full retirement pensions, and they confirmed that their commanders accommodated their medical conditions. 
Conversely, Army officials stated that soldiers may overstate their medical conditions in order to avoid deployment, and that commanders must take into account their other experience with the soldiers' limitations when evaluating their medical deployability. For example, one commander told us that a soldier brought up a foot injury to delay her deployment, although the injury had been diagnosed by a medical provider outside the military and was not in her military medical record. The commander allowed the soldier time to recuperate and allowed her to purchase a specific type of boot to accommodate her injury. However, when the soldier did not purchase the boots in a timely manner in order to further delay her deployment, the commander found the boots at a nearby supply store and deployed the soldier into theater. Although we were not able to determine the extent to which Army commanders have assigned soldiers to duties that are suitable for their limitations, there may be soldiers who had proper evaluations performed prior to deployment yet still have concerns about the suitability of their assigned duties. Soldiers should have access to a program at deployed locations that is similar to the ombudsman program available at Army installations. Soldiers whose medical conditions develop or worsen while at deployed locations and who may not believe they are assigned to appropriate duties should have access to a contact person who can address their concerns. This person should have access to the soldier's medical information and the authority to resolve any problems, and he or she should work independently from the soldier's commander. Long-standing issues regarding the medical deployability of servicemembers have become increasingly important as the Global War on Terrorism continues and large numbers of servicemembers are deployed. 
The Army's lack of an enforcement mechanism hampers it from ensuring that soldiers' MMRB or MEB evaluations are conducted within prescribed time frames and are not delayed by the failure of commanders or medical providers to provide required information on time. We estimate that 6 percent of soldiers from Forts Benning, Stewart, and Drum were deployed with medical conditions that required further evaluation by an MMRB or MEB, and that 3 percent of all soldiers preparing to deploy did not receive these needed evaluations prior to deployment. Furthermore, the commanders and medical providers who must make medical readiness and deployment decisions about soldiers do not always have full visibility over the soldiers' medical limitations because physical profile documentation is not always properly retained or complete. The Army intends to establish centralized electronic documentation and distribution of physical profiles to improve visibility, but it has not finalized plans specifying needed actions, associated milestones, and measures to ensure the timeliness of the process. Without timely MMRB or MEB evaluations and the retention of complete physical profile information for deploying soldiers with medical conditions, commanders who assign duties cannot be fully informed of soldiers' medical limitations. We did not find widespread cases of improper duty assignments for deployed soldiers with medical conditions; however, the weaknesses in the Army's procedures could permit this to occur. Although the Army ombudsman program may help alleviate concerns from soldiers and family members, soldiers and family members should be made aware of the program, and the program should be made available to soldiers prior to and during deployment. 
Unless soldiers have been fully evaluated, unless they have an independent contact person to raise their concerns, and unless commanders have full knowledge of the soldiers' limitations, the Army cannot safeguard soldiers with medical conditions from being deployed and assigned to duties unsuitable for their limitations. To safeguard soldiers with significant medical limitations from being deployed and assigned to duties unsuitable for their limitations, we recommend that the Secretary of the Army:

1. direct the Office of the Army Surgeon General and the Army Deputy Chief of Staff G-1 to collaboratively develop an enforcement mechanism to ensure that medical providers and commanders follow procedures so that soldiers whose permanent physical profiles indicate significant medical limitations are properly referred to and complete MEB and MMRB evaluations prior to deployment;

2. direct the Office of the Army Surgeon General and the Army Deputy Chief of Staff G-1 to move forward with plans to electronically process and retain physical profiles, including specific actions and milestones; to implement guidance to help ensure the timely distribution of profiles to commanders and the military personnel office; and to ensure that the medical record keeping system includes all information in the approved physical profiles and that all profiles are retained in soldiers' medical records;

3. direct the Army Human Resources Command to disseminate information and provide soldiers and their families access to an independent ombudsman program prior to and during deployment, to ensure that they are fully informed about this resource for addressing their concerns and to add independent oversight of Army medical and deployment processes in the interests of the soldiers.

DOD provided written comments on a draft of this report and concurred with each of our recommendations. 
In commenting on our first recommendation, DOD stated that our findings do not suggest the existence of a widespread problem throughout the Army, because only 17 soldiers in our sample deployed without appearing before a medical evaluation board, and because survey and interview responses indicate that commanders appear to be assigning soldiers with medical limitations to suitable duties. However, we note that the 17 soldiers who deployed without receiving proper board evaluations represent a sizeable proportion of the 42 soldiers in our sample who should have received such a review prior to deployment. These 17 soldiers, furthermore, can be projected from our sample to represent approximately 3 percent of the soldiers who were preparing for deployment at the three installations; we provide further clarification regarding this figure in the body of this report. Furthermore, as we have noted in our report, ad hoc measures to assign soldiers to suitable duties are not as reliable as an enforcement mechanism for ensuring that soldiers are so assigned. While we could not determine the number of soldiers who may have been assigned to unsuitable duties, as the Army does not track this information and our survey responses were limited, neither could we confirm that soldiers with medical limitations were consistently assigned to suitable duties. DOD noted that it had actions planned or underway to conduct a thorough inspection of the policies and procedures supporting a commander's determination of soldier deployability and to release new guidance regarding medical conditions that should preclude affected servicemembers from deployment, along with other initiatives, and we commend these efforts. 
In commenting on our second recommendation, DOD stated that the Office of the Army Surgeon General has identified and submitted requirements for the automation of physical profiles, beginning development by the end of 2008, and we commend this planned initiative. We note that it is important for these plans to have specific actions and milestones, and for the Army to implement guidance to ensure timely distribution of profiles to commanders and military personnel officials through the automated system. In commenting on our third recommendation, DOD stated that two programs, the Army Ombudsman Program and the Wounded Soldier and Family Hotline, are available to assist all soldiers (and their families) whether preparing to deploy, deployed, or redeploying. However, we note that the Wounded Soldier and Family Hotline does not constitute a resource independent of the command. Although DOD states that retribution is not tolerated against those using the hotline, we maintain our view that soldiers should be able to turn to a resource independent of the command. With regard to the Ombudsman Program, though it is independent of the command, we continue to assert our view that broad advertisement is needed for soldiers and their families to be made aware of this resource for those soldiers not only returning from deployment, but also prior to and during deployment. The Army’s comments are reprinted in appendix VI. In addition, the Army provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions concerning this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. To address the extent to which the Army is adhering to its medical and deployment requirements regarding decisions to send soldiers with medical limitations to Iraq and Afghanistan, we reviewed relevant DOD and Army guidance related to medical standards and deployment procedures. We discussed the deployment of servicemembers with medical conditions with a variety of officials from the Office of the Assistant Secretary of Defense for Health Affairs, the Department of the Army, and the Office of the Army Surgeon General. As agreed with congressional staff, we also met with the Offices of the Air Force and Navy Surgeons General as well as the Navy Bureau of Medicine and Surgery to gain an understanding of those services’ guidance on medical standards and deployment procedures. In December 2007, we provided a briefing to congressional staff that included a discussion of these services’ guidance regarding deployment of servicemembers with medical conditions. In addition, we reviewed Army guidance covering documentation of soldiers’ medical limitations prior to deployment and conditions under which soldiers with medical conditions are considered deployable. We reviewed a sample of medical records and interviewed medical providers, Army commanders, and soldiers at selected installations to identify and evaluate installation procedures for documenting medical limitations and training provided regarding this issue at each installation. We selected three Army installations—Fort Benning, Fort Stewart, and Fort Drum. 
We selected Fort Stewart and Fort Drum based on the number of active component soldiers deployed from each installation to Iraq or Afghanistan between April 1, 2006, and March 31, 2007; and we selected Fort Benning based on initial allegations of active component soldiers being deployed with significant medical limitations from this installation. For our medical records review, we selected random samples of active component soldiers at Fort Benning, Fort Stewart, and Fort Drum. In order to create the sample, we used the universe of soldiers from each installation who were preparing for deployment from April 1, 2006, to March 31, 2007, to Iraq or Afghanistan and answered "yes" to question number 3 on the pre-deployment health assessment (form DD 2795), which asks, "Are you currently on a profile, or light duty, or are you undergoing a medical board?" Our statistical samples are representative of soldiers at these installations who meet our eligibility criteria. Those who did not complete a pre-deployment health assessment during this time frame had no chance of being selected. Of the soldiers preparing to deploy, some may have their deployment delayed or may ultimately not be deployed for various reasons, such as not completing required training and not having proper security clearances for deployment, as well as not meeting medical readiness standards. For various reasons, medical records were not always available for review. Therefore, we reviewed more medical records than our target sample size on the assumption that we might not meet our desired precision. Specifically, seven reasons were identified for not being able to physically secure soldiers' medical records for review:

1. Charged to patient. When a patient visits a clinic (on-post or off-post), the medical record is physically given to the patient. The procedure is that the patient returns the medical record following the clinic visit.

2. Charged out to Medical Evaluation Board. 
The soldier is in the process of a medical evaluation board, and the medical record is retained by the board members.

3. Charged out to Physical Evaluation Board. The soldier is in the process of a physical evaluation board, and the medical record is retained by the board members.

4. Expired term of service. The soldier separates from the Army, and the medical record is sent to the Veterans Administration Records Management Center in St. Louis, Missouri.

5. Record is missing and not accounted for by the medical records department. No tracking sheet is in the file system to indicate whether the patient has checked it out or otherwise.

6. Permanent change of station. The soldier is still in the Army but has transferred to another installation; the medical record was sent to the new installation with the soldier.

The sample size for our medical record review was determined to provide a 95 percent confidence interval for an attribute measure with a precision of at least 5 percent. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals will include the true values in the study population. At two of the three installations we visited, we reviewed more records than needed to meet our target sample size because medical officials made available more medical records than our targeted sample amount. The number of soldiers in the samples and the total records reviewed at the installations visited are shown in table 3. 
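The precision arithmetic described above can be illustrated with a short calculation. This sketch is ours, not GAO's: it uses the standard normal-approximation interval for a simple random sample, whereas the study's actual estimates reflect its specific sampling design, and the counts in the example (96 of 685) are hypothetical.

```python
import math

def required_sample_size(margin=0.05, z=1.96, p=0.5):
    """Smallest simple random sample giving a 95% CI no wider than
    +/- margin for a proportion (worst case p = 0.5)."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

def proportion_ci(successes, n, z=1.96):
    """Point estimate and normal-approximation 95% confidence interval."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

print(required_sample_size())  # -> 385 records for +/- 5 percentage points
# Hypothetical: 96 of 685 reviewed records carry the attribute of interest.
p, lo, hi = proportion_ci(96, 685)
print(f"estimate {p:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```

This also shows why reviewing more records than the ~385-record minimum, as the report describes, buys narrower intervals: the half-width shrinks with the square root of the sample size.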
At each location, we examined medical documentation for evidence of physical profiles (form DA 3349) that were created between April 2001 and March 2007. We selected this time frame because it would include any profile in effect when a soldier in the sample deployed. We reviewed hard copy soldier medical records for evidence of physical profiles as well as any profiles located in the Armed Forces Health Longitudinal Technology Application (AHLTA), the Department of Defense's electronic medical record. In addition, we requested that installation medical personnel provide any information on profiles from the Army's Medical Protection System (MEDPROS) for each of the soldiers in the sample to ensure that our review of medical records was complete and that we identified all physical profiles. Even though MEDPROS is not an official medical record, it is used in the determination of medical readiness in preparation for deployment and contains medical limitation information and dates of physical profiles. After gathering all physical profiles, we reviewed them for completeness and analyzed them to determine if they were completed in accordance with Army guidance. From the soldiers who received a physical profile between April 2001 and March 2007, we identified the subset of soldiers with medical conditions that may require significant medical limitations: specifically, soldiers with a permanent or temporary profile designation of at least a 3, or a designation of 2 showing inability to perform certain functional activities. We did not review documentation of medical limitations other than the physical profiles. According to Army officials, mental health conditions are not generally documented in physical profiles unless the conditions limit a soldier's ability to accomplish his or her duty. Commanders were also notified of their soldiers' mental conditions by medical providers if they requested a mental health evaluation of the soldiers. 
Although we have taken many steps to ensure accurate data analysis of active component soldiers with a physical profile, previous GAO reviews have found that Army medical records do not contain all medical documentation as required; thus, our review may not encompass the full extent of soldiers with physical profiles. To determine the extent to which the Army is deploying soldiers to Iraq and Afghanistan with medical conditions requiring duty limitations, and whether it is assigning them to duties suitable to their limitations, we requested deployment data on the subset of soldiers whom we identified as having a significant medical limitation during the period from April 2001 to March 2007. We then compared data from our medical record review at Forts Stewart, Benning, and Drum to deployment data for soldiers in the sample provided by Army officials to identify soldiers with medical conditions that may require significant limitations who had deployed to Iraq or Afghanistan. We reviewed Army processes for tracking soldiers while deployed. We interviewed Army officials, including commanders and medical providers, about established procedures in place to ensure soldiers are assigned within their limitations. We also surveyed by e-mail 66 soldiers whom we identified as having deployed to Iraq and Afghanistan with medical conditions. We received responses from 24 of these soldiers, for a response rate of about 36 percent. These responses do not allow us to project the extent to which deployed soldiers with medical conditions across the Army were assigned to duties suitable to their medical limitations in Iraq and Afghanistan; nevertheless, we present the information we obtained to illustrate these issues. We conducted this performance audit from April 2007 through April 2008 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Army Physical Profile (DA Form 3349)

Appendix III: PULHES Definitions

P (Physical capacity or stamina). Normally includes conditions of the heart; respiratory system; gastrointestinal system; genitourinary system; nervous system; allergic, endocrine, metabolic, and nutritional diseases; diseases of the blood and blood-forming tissues; dental conditions; diseases of the breast; and other organic defects and diseases that do not fall under other specific factors of the system.

U (Upper extremities). Concerns the hands, arms, shoulder girdle, and upper spine (cervical, thoracic, and upper lumbar) in regard to strength, range of motion, and general efficiency.

L (Lower extremities). Refers to the feet, legs, pelvic girdle, lower back musculature, and lower spine (lower lumbar and sacral) in regard to strength, range of motion, and general efficiency.

H (Hearing and ears). Relates to auditory acuity and disease and defects of the ear.

E (Eyes). Centers on visual acuity and diseases and defects of the eye.

S (Psychiatric). Concerns personality, emotional stability, and psychiatric diseases.

Numerical designators:

1. Indicates a high level of medical fitness.

2. Refers to some medical condition or physical defect that may require some activity limitations.

3. Signifies one or more medical conditions or physical defects that may require significant limitations. The individual should receive assignments commensurate with his or her physical capability for military duty.

4. Indicates one or more medical conditions or physical defects of such severity that performance of military duty must be drastically limited.

Medical criteria (examples), by designator:

1. No assignment limitation. No demonstrable anatomical or physiological impairment within standards established in table 7-1. 
May have assignment limitations that are intended to protect against further physical damage/injury. May have minor impairments under one or more PULHES factors that disqualify for certain MOS training or assignment. Possesses impairments that limit functions or assignments.

The codes listed below are for military personnel administrative purposes. Corresponding limitations are general guidelines and are not to be taken as verbatim limitations. (For example, a Soldier with a code C may not be able to run but may have no restrictions on marching or standing.) Item 3 of DA Form 3349 will contain the specific limitations.

Limitations in running, marching, standing for long periods, etc. Medical criteria (examples): orthopedic or neurological conditions.

Limitations in any type of strenuous physical activity.

Limitations requiring dietary restrictions preventing consumption of combat rations. Medical criteria (examples): endocrine disorders; recent or repeated peptic ulcer activity; chronic gastrointestinal disease requiring dietary management.

Limitations prohibiting assignment or deployment to OCONUS areas where definitive medical care is not available. Medical criteria (examples): individuals who require continued medical supervision with hospitalization or frequent outpatient visits for serious illness or injury.

Limitations prohibiting wearing Kevlar or LBE, lifting heavy materials required of the MOS, or overhead work. Medical criteria (examples): arthritis of the neck or joints of the extremities with restricted motion; disk disease; recurrent shoulder dislocation.

Limitations on duty where sudden loss of consciousness would be dangerous to self or to others, such as work on scaffolding, vehicle driving, or near moving machinery. Medical criteria (examples): seizure disorders; other disorders producing syncopal attacks or severe vertigo, such as Ménière’s syndrome. 
Given the known handicaps associated with high-frequency hearing loss, commanders are strongly encouraged to make an individual risk assessment of any Soldier with hearing loss who might be tasked to perform duties that require good hearing, for example, localization and detection of friend or foe sounds; scout, point, sentry, forward listening post/observer, or radio/telephone operator duties; and so forth. (See DA Pam 40–501, Chapter 2–4, Combat Readiness Effects.)

Hearing protection measures required to prevent further hearing loss. Medical criteria (examples): susceptibility to acoustic trauma.

1. No exposure to noise in excess of 85 dBA (decibels measured on the A scale) or weapon firing without use of properly fitted hearing protection. Annual hearing test required.

2. Further exposure to noise is hazardous to health. No duty or assignment to noise levels in excess of 85 dBA or weapon firing (not to include firing for preparation of replacements for overseas movement qualification or annual weapons qualification with proper ear protection). Annual hearing test required.

3. No exposure to noise in excess of 85 dBA or weapon firing without use of properly fitted hearing protection. This individual is ‘deaf’ in one ear. Any permanent hearing loss in the good ear will cause a serious handicap. Annual hearing test required.

4. Further duty requiring exposure to high-intensity noise is hazardous to health. No duty or assignment to noise levels in excess of 85 dBA or weapon firing (not to include firing for overseas movement qualification or annual weapons qualification with properly fitted ear protection). No duty requiring acute hearing. A hearing aid must be worn to meet medical fitness standards.

Limitations restricting assignment to cold climates. Medical criteria (examples): documented history of cold injury; vascular insufficiency; collagen disease with vascular or skin manifestations.

Limitations restricting exposure to high environmental temperature. 
History of heat stroke; history of skin malignancy or other chronic skin diseases that are aggravated by sunlight or high environmental temperature.

Limitations restricting wearing of combat boots. Medical criteria (examples): any vascular or skin condition of the feet or legs that, when aggravated by continuous wear of combat boots, tends to develop unfitting ulcers.

Limitations restricting wearing of, or exposure to, required items necessary to perform duty (for example, latex or wool). Medical criteria (examples): established allergy to wool or latex.

WAIVER granted for a disqualifying medical condition/standard for initial enlistment or appointment. The disqualifying medical condition/standard for which a waiver was granted will be documented in the Soldier’s accession medical examination.

Limitation not otherwise described, to be considered individually. (Briefly define limitation in item 8.) Medical criteria (examples): any significant functional assignment limitation not specifically identified elsewhere.

Deployment. This code identifies a Soldier with restrictions on deployment. Specific restrictions are noted in the medical record.

MMRB. This code identifies a Soldier with a permanent profile who has been returned to duty by an MMRB (MOS/Medical Retention Board).

This code identifies a Soldier who is allowed to continue in military service with a disease, injury, or medical defect that is below medical retention standards, pursuant to a waiver of retention standards under chapter 9 or 10 of this publication, or a waiver of an unfit finding with continuation on active duty or in active Reserve status under AR 635–40.

Fit for duty. This code identifies the case of a Soldier who has been determined to be fit for duty (not entitled to separation or retirement because of physical disability) after complete processing under AR 635–40.

*The Army regulation does not provide medical criteria for these codes.

Appendix V: Department of Defense Pre-Deployment Health Assessment (DD Form 2795)

Brenda S. 
Farrell, (202) 512-3604 or farrellb@gao.gov. In addition to the contact named above, Marilyn Wasleski, Assistant Director; Gina Hoffman; LaToya King; Grace Materon; Elisha Matvay; Sonya Phillips; Jeanett Reid; Norris Smith III; and Cheryl Weissman made significant contributions to the report. In addition, Terry Richardson, Carl Barden, and Steven Putansu provided guidance and assistance with design and analysis; Nicole Harms provided legal advice; Steve Fox, Marcia Crosse, and Tom Conahan advised on message preparation; and Clara Mejstrik, Adam Smith, Maria Storts, and John Wren provided assistance during medical file reviews.

DOD Civilian Personnel: Medical Policies for Deployed DOD Federal Civilians and Associated Compensation for Those Deployed. GAO-07-1235T. Washington, D.C.: September 18, 2007.

Defense Health Care: Comprehensive Oversight Framework Needed to Help Ensure Effective Implementation of a Deployment Health Quality Assurance Program. GAO-07-831. Washington, D.C.: June 22, 2007.

DOD Civilian Personnel: Greater Oversight and Quality Assurance Needed to Ensure Force Health Protection and Surveillance for Those Deployed. GAO-06-1085. Washington, D.C.: September 29, 2006.

Military Personnel: DOD and the Services Need to Take Additional Steps to Improve Mobilization Data for the Reserve Components. GAO-06-1068. Washington, D.C.: September 20, 2006.

Military Disability System: Improved Oversight Needed to Ensure Consistent and Timely Outcomes for Reserve and Active Duty Service Members. GAO-06-362. Washington, D.C.: March 31, 2006.

Military Personnel: Top Management Attention Is Needed to Address Long-standing Problems with Determining Medical and Physical Fitness of the Reserve Force. GAO-06-105. Washington, D.C.: October 27, 2005.

Defense Health Care: Improvements Needed in Occupational and Environmental Health Surveillance during Deployments to Address Immediate and Long-term Health Issues. GAO-05-632. Washington, D.C.: July 14, 2005. 
Defense Health Care: Force Health Protection and Surveillance Policy Compliance Was Mixed, but Appears Better for Recent Deployments. GAO-05-120. Washington, D.C.: November 12, 2004.

Military Personnel: DOD Needs to Address Long-term Reserve Force Availability and Related Mobilization and Demobilization Issues. GAO-04-1031. Washington, D.C.: September 15, 2004.

Defense Health Care: DOD Needs to Improve Force Health Protection and Surveillance Processes. GAO-04-158T. Washington, D.C.: October 16, 2003.

Defense Health Care: Quality Assurance Process Needed to Improve Force Health Protection and Surveillance. GAO-03-1041. Washington, D.C.: September 19, 2003.

Military Personnel: DOD Needs More Data to Address Financial and Health Care Issues Affecting Reservists. GAO-03-1004. Washington, D.C.: September 10, 2003.

Defense Health Care: Army Has Not Consistently Assessed the Health Status of Early-Deploying Reservists. GAO-03-997T. Washington, D.C.: July 9, 2003.

Defense Health Care: Army Needs to Assess the Health Status of All Early-Deploying Reservists. GAO-03-437. Washington, D.C.: April 15, 2003.

VA and Defense Health Care: Military Medical Surveillance Policies in Place, but Implementation Challenges Remain. GAO-02-478T. Washington, D.C.: February 27, 2002.

Gulf War Illnesses: Research, Clinical Monitoring, and Medical Surveillance. GAO/T-NSIAD-98-88. Washington, D.C.: February 5, 1998.

Gulf War Illnesses: Improved Monitoring of Clinical Progress and Reexamination of Research Emphasis Are Needed. GAO/NSIAD-97-163. Washington, D.C.: June 23, 1997.

Defense Health Care: Medical Surveillance Improved Since Gulf War, but Mixed Results in Bosnia. GAO/NSIAD-97-136. Washington, D.C.: May 13, 1997.

Reserve Forces: DOD Policies Do Not Ensure That Personnel Meet Medical and Physical Fitness Standards. GAO/NSIAD-94-36. Washington, D.C.: March 23, 1994.

Operation Desert Storm: War Highlights Need to Address Problem of Nondeployable Personnel. GAO/NSIAD-92-208. 
Washington, D.C.: August 31, 1992.

The increasing need for warfighters for the Global War on Terrorism has meant longer and multiple deployments for soldiers. Medical readiness is essential to their performing needed duties, and an impairment that limits a soldier's capacities represents risk to the soldier, the unit, and the mission. Asked to review the Army's compliance with its guidance, GAO examined the extent to which the Army is (1) adhering to its medical and deployment requirements regarding decisions to send soldiers with medical conditions to Iraq and Afghanistan, and (2) deploying soldiers with medical conditions requiring duty limitations, and assigning them to duties suitable for their limitations. GAO reviewed Army guidance, and medical records for those preparing to deploy between April 2006 and March 2007; interviewed Army officials and commanders at Forts Benning, Stewart, and Drum, selected for their high deployment rates; and surveyed deployed soldiers with medical limitations. Army guidance allows commanders to deploy soldiers with medical conditions requiring duty limitations, subject to certain requirements, but the Army lacks enforcement mechanisms to ensure that all requirements are met, and medical record keeping problems obstruct the Army's visibility over these soldiers' conditions. A soldier diagnosed with an impairment must be given a physical profile form designating numerically the severity of the condition and, if designated 3 or higher (more severe), must be evaluated by a medical board. Commanders must then determine proper duty assignments based on soldiers' profile and commanders' staffing needs. From a random projectable sample, GAO estimates that 3 percent of soldiers from Forts Benning, Stewart, and Drum who had designations of 3 did not receive required board evaluations prior to being deployed to Iraq or Afghanistan for the period studied. 
In some cases, soldiers were not evaluated because commanders lacked timely access to profiles; in other cases, commanders did not take timely actions. The Army also had problems with retention and completeness of profiles; although guidance requires that approved profiles be retained in soldiers' medical records, 213 profiles were missing from the sample of 685 records reviewed. The Army was not consistent in assigning numerical designations reflecting soldiers' abilities to perform functional activities. GAO estimates from a random projectable sample that 7 percent of soldiers from these three installations had profiles indicating their inability to perform certain functional activities, yet carrying numerical designators below 3. While medical providers can "upgrade" numerical designations discretionarily based on knowledge of soldiers' conditions, the upgrades can mask limitations and cause commanders to deploy soldiers without needed board evaluations. While GAO found no evidence of widespread revision in profile designations, some soldiers interviewed or surveyed disagreed with their designations yet were reluctant to express concerns for fear of prejudicial treatment. The Army has instituted a program to provide ombudsmen to whom soldiers can bring medical concerns, but it is targeted at returning soldiers and is not well publicized as a resource for all soldiers with medical conditions. Without timely board evaluations and retention of profile information for deploying soldiers with medical conditions, the Army lacks full visibility and commanders must make medical readiness, deployment, and duty assignment decisions without being fully informed of soldiers' medical limitations. GAO estimates that about 10 percent of soldiers with medical conditions that could require duty limitations were deployed from the three installations, but survey response was too limited to enable GAO to project the extent to which they were assigned to suitable duties. 
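The "random projectable sample" estimates above rest on standard statistical inference. As a rough illustration only (GAO's actual estimates reflect its specific sample design, which this simple-random-sample formula does not capture), a sample proportion and an approximate 95 percent confidence interval can be computed from the 213-of-685 missing-profile figure cited in the text:

```python
import math

# Illustrative only: a simple-random-sample proportion estimate with a
# normal-approximation 95 percent confidence interval. GAO's published
# estimates account for its own sample design and are not reproduced here.

def proportion_ci(successes, n, z=1.96):
    """Return (point estimate, lower bound, upper bound) for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)           # standard error of the proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# 213 of the 685 reviewed records were missing approved profiles.
p, lo, hi = proportion_ci(213, 685)
```

Here `p` is about 0.31, with the interval indicating the plausible range for the population proportion under simple random sampling.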
Along with interviews, however, responses suggest that both soldiers and commanders believe soldiers are generally assigned to duties that accommodate their medical conditions. Occasional exceptions have occurred when a profile did not reflect all necessary medical information or a soldier's special skill was difficult to replace. Officials said soldiers sometimes understate their conditions to be deployed with their units, or overstate them to avoid deployment.
Creating effective security in the nation’s ports in the post-September 11 world is a challenging task. Ports present attractive targets for terrorists: they are sprawling, easily accessible by water and land, close to crowded metropolitan areas, and interwoven with complex transportation networks. Besides terminals where goods bound for import or export are unloaded or loaded onto vessels, ports also contain other facilities critical to the nation’s economy, such as refineries, factories, and power plants. These many facilities, along with the ships and barges that ply port waterways, can be vulnerable on many fronts. For example, container terminals, where containers are transferred between ships and railroad cars or trucks, need ways to screen vehicles and routinely check cargo for evidence of tampering. At factories and other facilities where hazardous materials are present, safeguards must be in place to prevent unauthorized persons from gaining access. Similarly, vessels ranging from oil tankers to tugboats need effective access control over critical operating areas, such as engine and control rooms. The framework for the nation’s collective response to this challenge is now found in the Maritime Transportation Security Act (MTSA), passed by the Congress in November 2002. MTSA’s implementing regulations require owners and operators of facilities and vessels to conduct assessments that will identify their security vulnerabilities and to develop security plans to mitigate these vulnerabilities. Under these regulations, these plans are to include such items as measures for access control, responses to security threats, and drills and exercises to train staff and test the plan. MTSA was enacted after the Coast Guard initially began developing the Port Security Assessment Program in the wake of the September 11 attacks. 
Some basic information about geographic information systems, or GIS, may be helpful in understanding this component of the Port Security Assessment Program. A GIS can be thought of as a sort of electronic map, but with many more capabilities than traditional paper mapping. For example, paper maps can provide only a static snapshot of selected entities and their locations and cannot be easily updated or changed. By contrast, information in a GIS can be easily and continually updated. In addition, because a GIS stores information on separate “layers” related to such things as roads or buildings, users can combine data layers at will, providing the capability to quickly create and view maps for specific purposes any time they are needed. Data layers in a GIS can be extremely varied. Typical types include the following:

Layers describing location, ownership, and other information about real property (called cadastral data).

Layers that have the characteristics of a map and the image qualities of a photograph (called digital orthoimagery).

Layers describing water features such as lakes, ponds, streams, rivers, canals, oceans, and coastlines (called hydrographic data).

For the Coast Guard, potential GIS layers could include transportation—describing anchorages, bridges, and roadways; utilities—including power plants, power lines, and substations; and emergency response—including police and fire stations, and hospitals. Figure 1 illustrates, in a simplified way, this concept of layers and how they can be integrated. The database capabilities of a GIS allow many other kinds of information to be embedded on these data layers as well, so that the information is easily available. For example, a GIS allows the user not only to know the location of a building relative to other buildings or roads, but also to obtain information such as the building’s owner, when the building was built, the building’s contents, and its dimensions and height. 
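The layer-and-attribute concept described above can be sketched in a few lines of code. The sketch below is purely illustrative (the class names, features, and coordinates are invented, and no actual GIS software is modeled): each layer holds features of one theme, each feature carries embedded attribute data, and layers are combined on demand to build a map for a specific purpose.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    location: tuple                                 # (latitude, longitude)
    attributes: dict = field(default_factory=dict)  # owner, contents, height, ...

@dataclass
class Layer:
    theme: str                                      # e.g., "buildings", "utilities"
    features: list = field(default_factory=list)

def compose_map(layers, themes):
    """Combine only the requested data layers, mimicking on-demand map creation."""
    return [lyr for lyr in layers if lyr.theme in themes]

# A building feature with embedded attribute data, as described in the text.
warehouse = Feature("Pier 7 Warehouse", (29.74, -95.27),
                    {"owner": "Port Authority", "built": 1968, "height_ft": 40})
buildings = Layer("buildings", [warehouse])
roads = Layer("transportation", [Feature("Harbor Rd", (29.75, -95.26))])
power = Layer("utilities", [Feature("Substation 3", (29.73, -95.28))])

# Build a purpose-specific map from just two of the three available layers.
emergency_map = compose_map([buildings, roads, power],
                            {"buildings", "transportation"})
```

The point of the sketch is the separation of concerns: themes stay in independent layers, attribute data travels with each feature, and a map is just a chosen subset of layers.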
This ability to create maps on demand for specific purposes, with additional information at the ready, surpasses what can be done with traditional mapping approaches. One illustration of a GIS’s usefulness came in connection with efforts to recover debris from the space shuttle Columbia when it was lost in reentering the earth’s atmosphere on February 1, 2003. Debris from the shuttle was spread over at least 41 counties in Texas and Louisiana. In Texas, a state-operated GIS provided authorities with precise maps and search grids to guide reconnaissance and collection crews in the field. Officials in charge of the effort used maps of debris fields, combined with GIS data about the physical terrain, to carefully track the pieces of debris found. Figure 2 is a map, created from debris data entered into the GIS, showing the general west-to-east track of debris across several east Texas counties and the outer boundaries of the area in which debris was found. The Coast Guard has made significant revisions to adapt the Port Security Assessment Program to the increasing number of security evaluations performed by port stakeholders and to address shortcomings in the program’s initial implementation. The Coast Guard initially set out to use the program as an assessment of security conditions at 55 ports. The Coast Guard and the contractor it hired to develop the assessment approach and conduct the assessments started the first assessments in August 2002, when other assessment efforts were also under way. Port stakeholders around the country had begun or already completed their own assessments of their facilities or vessels in order to identify security vulnerabilities of their assets or obtain federal assistance in strengthening their security. 
Even more security information was to become available as new regulatory requirements were implemented in 2003 requiring security assessments to be performed by the owners or operators of facilities and vessels operating in the nation’s ports. This changing security environment and higher-than-expected costs to complete the contractor’s initial assessments prompted the Coast Guard to revise the scope of the contractor’s assessments. Our examination of the contractor’s initial assessments identified additional shortcomings in the quality of the work and the assessment approach. In response, the Coast Guard temporarily postponed all assessment work to make further revisions, both to take advantage of the other sources of assessment information and make the assessments more useful in port security planning efforts. The revised program (1) added a GIS as a new feature and (2) tailored security assessments for particular purposes, such as synthesizing existing assessments or assessing certain infrastructure at the direction of local Coast Guard personnel. The assessments are to be completed by February 2005—but the Coast Guard is still developing its GIS and is uncertain as to when the GIS will be ready for use. The Coast Guard began the Port Security Assessment Program to assess the vulnerabilities of the nation’s most strategic commercial and military ports in the aftermath of the September 11, 2001, terrorist attacks. (See fig. 3 for a timeline of the program.) To identify which ports were most strategic, the Coast Guard considered such factors as cargo volume, import/export cargo value, volume of passenger traffic on ferries or cruise ships, population density around the port, the presence of critical infrastructure or key assets, the presence of military forces or bases, and whether the port was designated to support major military deployments. From this analysis, 55 ports out of 361 ports were chosen to be the first to receive port security assessments. 
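The report lists the factors the Coast Guard considered in choosing the 55 strategic ports but not how those factors were weighed against one another. As a purely hypothetical illustration of how such a multi-factor ranking could work (the weights, scores, and port names below are invented, not the Coast Guard's actual method), a weighted scoring scheme might look like this:

```python
# Illustrative only: invented weights over the factors named in the report
# (cargo volume, cargo value, passenger traffic, population density,
# critical infrastructure, military presence). Scores are on a 0-10 scale.

def strategic_score(port, weights):
    """Weighted sum of a port's factor scores; missing factors count as 0."""
    return sum(weight * port.get(factor, 0)
               for factor, weight in weights.items())

weights = {"cargo_volume": 0.3, "cargo_value": 0.2, "passenger_traffic": 0.15,
           "population_density": 0.15, "critical_infrastructure": 0.1,
           "military_presence": 0.1}

ports = [
    {"name": "Port A", "cargo_volume": 9, "cargo_value": 8,
     "passenger_traffic": 2, "population_density": 7,
     "critical_infrastructure": 9, "military_presence": 3},
    {"name": "Port B", "cargo_volume": 2, "cargo_value": 1,
     "passenger_traffic": 1, "population_density": 2,
     "critical_infrastructure": 1, "military_presence": 0},
]

# Rank ports from most to least strategic; a cutoff would then select the top N.
ranked = sorted(ports, key=lambda p: strategic_score(p, weights), reverse=True)
```

In practice the selection would also have to handle non-numeric criteria, such as whether a port was designated to support major military deployments.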
In April 2002, the Coast Guard selected a contractor to perform the assessments. Under this arrangement, the contractor was responsible for developing an approach (which the Coast Guard calls “Version 1”) to assess vulnerabilities of port assets and systems such as cargo facilities, manufacturing facilities, passenger terminals, power generation and fueling facilities, as well as other infrastructure such as public access areas and bridges. The assessment was to identify the relationships of selected assets to port systems, identify the vulnerabilities of those assets to terrorist attacks, and recommend actions to mitigate the vulnerabilities. With oversight from the Coast Guard, the contractor had primary responsibility for conducting key activities of each assessment, such as identifying which assets should be assessed, collecting data from stakeholders, making on-site visits, and analyzing the data collected. The final product was to be a comprehensive written report of the findings identified during the assessment. Primary customers for this work were the local Coast Guard Captain of the Port and port stakeholders serving on Area Maritime Security Committees, who could use it in such security planning efforts as the development of an Area Maritime Security Plan. The first assessments began in August 2002; the Coast Guard’s goal was to complete them at all 55 ports by the end of 2004. To further refine the approach before assessing “megaports” such as New York/New Jersey or Los Angeles/Long Beach, as well as to give the program a chance to build additional assessment teams to perform the work, the Coast Guard decided to try out the approach first at medium-sized ports such as San Diego and Boston. Under the time frame the Coast Guard adopted, officials expected to conduct assessments of 8 ports in 2002, 18 in 2003, and 24 in 2004. Several actions taken by port stakeholders led to substantial changes in the approach. 
One of these developments was that many port stakeholders were starting or completing assessments on their own. Stakeholders, such as port authorities, and owners and operators of facilities and vessels began conducting assessments in order to identify security vulnerabilities of their assets or to meet application requirements for federal grants. In some cases, initial assessments were performed shortly after the September 11, 2001, terrorist attacks and were followed by more comprehensive assessments conducted either on their own or by contractors. For example, port stakeholders such as chemical producers that were members of certain industry or trade organizations were required to complete assessments of their facilities using approved assessment methodologies as a condition of their membership in the organization. Beginning in September 2002, the Coast Guard also issued a series of suggested guidelines for port stakeholders to use in conducting security assessments and developing security plans to address any identified vulnerabilities. In addition to the assessment activities that many stakeholders voluntarily undertook after the terrorist attacks, more maritime security information became increasingly available as the Maritime Transportation Security Act began to be implemented. Enacted in November 2002, MTSA mandated major changes in the nation’s approach to maritime security and called for a comprehensive framework that includes planning, personnel security, and careful monitoring of vessels and cargo. The regulations implementing MTSA required owners or operators of specific facilities and vessels in the nation’s ports to conduct assessments and develop plans to address vulnerabilities. These security assessments and plans were to be reviewed and approved by the Coast Guard prior to July 1, 2004. 
As a result, facilities and vessels that had not already completed a security assessment were now required to do so, thereby increasing the amount of assessment information available from port stakeholders at the 55 ports as July 1, 2004, drew nearer. Coupled with the changes in the amount of information to be generated by others, high costs for the first set of assessments prompted the Coast Guard to begin reassessing the Version 1 approach for conducting the assessments. According to the Coast Guard, assessments for the first 8 ports cost nearly three times more than was originally expected, exceeding $1 million per port. To address this issue, the Coast Guard made changes in the assessment approach, including greater emphasis on discussions early on in the assessment process with local Coast Guard Captains of the Port in order to better focus on the facilities and infrastructure needing to be assessed and the adoption of a standardized report outline and format to reduce redundancy. The Coast Guard decided to pilot-test this new approach, which the Coast Guard now calls “Version 2,” at two ports in the summer of 2003. As this new approach was being readied, our own review of the contractor’s assessments disclosed additional shortcomings. In a September 2003 testimony before the Senate Committee on Commerce, Science, and Transportation, we expressed concern about how the assessment program was being implemented. In talking with some port stakeholders who participated in the assessment, we found that many of them saw little usefulness in the assessments beyond what they already knew about their vulnerabilities from previously completed assessments. Some key port stakeholders declined to participate in the assessment after receiving lengthy questionnaires from the contractor asking for information stakeholders considered proprietary. 
Port stakeholders also said they had not been given the opportunity to review or comment on the draft assessment report, which contained errors and inaccuracies. Finally, the contractor was moving to use the Version 2 approach in the next set of assessments before the lessons learned from the pilot tests could be identified and incorporated into the assessment approach. We shared our findings with Coast Guard officials and suggested that the assessment approach be further revised. In addition to giving the Captains of the Port and Coast Guard personnel a larger role in identifying the critical assets to be assessed, we suggested that the Coast Guard reduce duplication and lessen the burden on stakeholders by doing more to take into account already-completed assessments of facilities and assets. The Coast Guard agreed and postponed conducting more assessments until additional changes to address these deficiencies were made. While considering what changes needed to be made to the assessment program, the Coast Guard also determined that it was essential to provide local Coast Guard officials and certain members of the local Area Maritime Security Committee a means to retrieve maritime security information and display it for planning and response purposes at the ports. Although a significant amount of security information was now available, it was kept in disparate locations and was not readily available. With the regulations implementing MTSA requiring Captains of the Port and Area Maritime Security Committees to develop portwide Area Maritime Security Plans, access to the available security information became increasingly important in order for them to carry out this responsibility and improve the protection of the marine transportation system. To provide local Coast Guard officials and certain members of the local Area Maritime Security Committee access to this information, the Coast Guard decided to incorporate a GIS as a new feature in the assessment program. 
At the local port level, the GIS would integrate the security information into a single electronic database that would allow the information to be retrieved and displayed within the context of a particular port area. Whereas previous assessment results were compiled into a published report that would characterize the port’s security posture at a single point in time, a GIS has the capability of being updated as new information becomes available. A GIS also provides a tool for visually depicting the port and for retrieving security or assessment information as needed in the development or revision of Area Maritime Security Plans. The Coast Guard believes this will help the Captains of the Port and the Area Maritime Security Committees better visualize the port and will enhance their ability to develop security plans, as well as to respond to a security incident, should one occur. In addition to the GIS component, the revised program has three other components, all related to assessments. The Coast Guard revised the assessment approach so that it would provide more specialized information about port security. The approach, known as Version 3, has three different types of assessments that collectively are aimed at providing both a synthesis of what is already known about security at a port and studies of specific topics or infrastructure that have not been fully assessed. When completed, these assessments will provide the core security information to populate the GIS. These assessment components are as follows:

Assessment of Assessments—An identification and inventory of completed security assessments of port assets and critical infrastructure within a port. This inventory is designed to help the assessment team minimize the possibility of needlessly duplicating previously completed assessments as well as to provide the Captain of the Port and the Area Maritime Security Committee with greater awareness of existing security information. 
Terrorist Operations Assessment—An assessment utilizing the expertise of contractors composed of former Navy Special Operations personnel to provide an outsider’s perspective on the ports’ vulnerabilities to a terrorist attack. This assessment is to evaluate potential terrorist targets within the ports and identify likely attack scenarios for the Captain of the Port and Area Maritime Security Committee to consider addressing in the Area Maritime Security Plan.

Special Assessment—An assessment of specific port assets, infrastructure, or operations that are critical to the port but have not been previously assessed from a maritime perspective. Performed at the request of the Captain of the Port and the Area Maritime Security Committee, this assessment is to provide vulnerability, impact, and countermeasure information on those assets, infrastructure, or operations. Examples include blast impact assessments of commercial vessels, plume dispersion assessments of an attack on a vessel or facility containing hazardous materials, and security assessments of underwater tunnels.

The Coast Guard has a more definite schedule for completing the assessments than for completing the GIS. The Coast Guard resumed assessments in March 2004 using the Version 3 approach and plans to complete assessments at the remaining ports by February 2005. For the GIS component, the Coast Guard plans to use its own GIS. Until this system is operational for port security, the Coast Guard plans to lease a commercial GIS that will enable Coast Guard staff to familiarize themselves with how a GIS works and identify their specific system needs or requirements so the Coast Guard’s GIS can be customized accordingly. Project officials chose a commercial-off-the-shelf software application, iMap, that provides the Coast Guard access to over 800 layers of data containing information related to the nation’s ports. 
Because the Coast Guard’s GIS is still in development for port security, when the GIS component will be made operational and available to all assessed ports is yet to be determined. The Coast Guard’s revised approach appears to provide a useful planning and response tool for port security, but the implementation of the assessment program is at higher risk because of two major problems. First, the centerpiece of the new approach, the GIS component, is being developed without several key project management steps that are critical to success in such projects. Not following these steps increases the risk that the data collected will not provide port security officials with the information they need to adequately assess, identify, and mitigate security risks. Second, for the GIS component and the program as a whole, the Coast Guard lacks a strategy that clearly defines how the program will be managed, how much it will cost, or what activities will continue over the longer term. Lack of a strategy increases the risk of cost overruns, missed deadlines, and a less-than-effective program. At the same time the Coast Guard is facing these problems, it is also conducting security assessments at individual ports using the revised approach, and for this part of the program, the results to date appear more favorable. Early indications from local Coast Guard officials at the ports where the new assessments have been performed are that these assessments are of some usefulness in current security planning activities. However, not resolving the broader planning and management issues could also limit the potential value of these assessments in filling any remaining gaps in the Coast Guard’s awareness of the security posture in the ports. The Captains of the Port and other Coast Guard officials we talked with agreed that a GIS with security assessment information would greatly facilitate their security planning and response efforts.
They provided such examples as the following, based on their understanding of the tools that would be available with the GIS: For planning efforts, the visual nature of the GIS would greatly enhance the Captain of the Port’s and Area Maritime Security Committee’s understanding of the connections between port facilities, assets, and infrastructure in ways that would not be possible through paper reports. For incident response efforts, the capability of GIS to store and retrieve security information such as plans and assessments of particular assets within the port would quicken response times because the information can be immediately located and viewed. A useful, well-designed system does appear to carry great promise. For example, if Coast Guard personnel were alerted that a particular port may be targeted and that warehouses containing shipping containers were at risk, officials could quickly create a map showing the location and contents of the warehouses, ingress points located near the warehouses, and depth of the nearby waterways throughout the port. Using this information, security officials could assess the relative risk to each warehouse, prioritize actions based on the risk level, and act almost immediately to secure the most vulnerable locations. However, developing a useful GIS is a significant and complex challenge. One reason is that every port has its own unique mix of geographic characteristics and operations that must be accurately captured. For example, one port may be located along a stretch of river while another may sit next to the open ocean; one port may have a high volume of cruise ship traffic while another may have a high concentration of chemical and petroleum facilities. These different characteristics will require a GIS that is flexible enough to be of use in a variety of settings.
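The kind of layered query described in the warehouse example can be sketched in miniature. The sketch below is purely illustrative: the layer names, coordinates, and attributes are hypothetical, and a real GIS would hold far richer geospatial data than these toy dictionaries.

```python
# Illustrative sketch only: a toy, in-memory stand-in for the kind of
# layered query a port-security GIS might support. All layer names,
# coordinates, and attributes are hypothetical.

# Each "layer" maps a feature ID to its attributes, including location.
warehouses = {
    "WH-1": {"location": (29.72, -95.26), "contents": "chemicals", "guarded": False},
    "WH-2": {"location": (29.74, -95.25), "contents": "containers", "guarded": True},
    "WH-3": {"location": (29.73, -95.27), "contents": "containers", "guarded": False},
}
ingress_points = {
    "GATE-A": (29.721, -95.261),
    "GATE-B": (29.739, -95.252),
}

def distance(a, b):
    """Rough planar distance between two lat/lon pairs (adequate for a toy)."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def rank_warehouses(warehouses, ingress_points):
    """Rank unguarded warehouses by proximity to the nearest ingress point,
    closest (most exposed) first."""
    at_risk = []
    for wid, attrs in warehouses.items():
        if attrs["guarded"]:
            continue
        nearest = min(distance(attrs["location"], p) for p in ingress_points.values())
        at_risk.append((nearest, wid))
    return [wid for _, wid in sorted(at_risk)]

print(rank_warehouses(warehouses, ingress_points))  # → ['WH-1', 'WH-3']
```

The point of the sketch is the design idea the report attributes to GIS users: once assets, ingress points, and other features live in queryable layers rather than in static paper reports, a prioritized response list can be produced in seconds as conditions change.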
Another reason the GIS can be challenging is that some security-related situations, such as potential terrorist activities, involve a great deal of unpredictability and the kinds of information and analyses needed to address such uncertainty are difficult to anticipate. This complexity places a premium on proper planning. Over the years, we have analyzed information technology systems across a broad range of federal programs and agencies, and these analyses have repeatedly shown that without adequate planning, the risks increase for cost overruns, schedule slippages, and systems that are not effective or usable. For example, the Bureau of Land Management (BLM) spent more than $67 million on a system that was never deployed. When the system was tested prior to deployment, it was found not to meet users’ requirements because it did not support BLM’s business activities, was too complex, and significantly impeded worker productivity. We found this system failed because it was developed without a clear understanding of requirements and without a credible project schedule with reliable milestones. In another example, the Centers for Medicare and Medicaid Services had similar problems that led to its planned Medicare Transaction System being cancelled—the project did not have fully defined and agreed-to requirements and had a flawed project schedule. These types of problems make it prudent to ensure that planning of GIS applications is adequate. Coast Guard officials indicated that they viewed the development of the port security GIS database as an add-on to existing Coast Guard information systems, not as a new database or information system. Within this context, however, it is still important to ensure that the steps being taken are likely to produce a satisfactory result. 
In that light, we assessed the Coast Guard’s development efforts using established best practices in the industry for developing information technology systems, including those created by the Institute of Electrical and Electronics Engineers/Electronic Industries Alliance. The Coast Guard’s current efforts fall short of these criteria in two key areas—defining what the GIS should do and establishing sufficient plans to ensure that the requirements can be successfully realized. That is, successful implementation of the Coast Guard’s port security GIS is at higher risk because the Coast Guard has not used established project management practices, including defining requirements and developing a project schedule, to oversee and guide the program. One aspect of developing any information technology system such as a GIS involves establishing and maintaining a common and unambiguous definition of functional requirements among the project team, system users, and software developer. These requirements define what the system will be expected to do for its users once it is developed and implemented. For example, one requirement could be to ensure that the system can link together specified types of geospatial data to provide the user with sufficient information. Another requirement could be to ensure that users would be able to print paper maps and other information found in the GIS. A third could be that the GIS be available to its users 24 hours a day, 7 days a week. Requirements such as these could be important in ensuring that the system will deliver what users need. It is critical that functional requirements are carefully defined and that they flow directly from how the organization’s day-to-day operations are or will be carried out to meet mission needs. Improperly defined or incomplete requirements have been commonly identified as a root cause for why systems fail or do not meet their cost, schedule, or performance goals.
Without adequately defined requirements, significant risk exists that a system will need extensive and costly changes before it will meet the organization’s needs. The Coast Guard’s actions to develop GIS requirements are not being carried out using established practices. The Coast Guard’s approach for addressing these requirements takes three main forms: First, the Coast Guard is using the assessments being conducted at the 55 ports to identify requirements for the GIS it is developing. Second, the Coast Guard is using feedback from the experiences of local officials with the commercial-off-the-shelf software application currently in use to help determine what requirements should be included. Finally, the contractor supporting the interim GIS has been tasked to identify the GIS data layers most frequently used by the Coast Guard. However, these actions fall short of meeting best practices. First, there are indications that requirements identified during the assessment visits did not necessarily include functional requirements. Second, although tasks to identify the data layers accessed by the Coast Guard using its interim GIS solution could be used to identify requirements for the port security GIS, these tasks have not yet been completed and there is no estimate as to when the information will be available. According to Coast Guard officials, the Coast Guard intends to use an existing information system instead of building a new GIS database or information system that is exclusive to port security. However, while the Coast Guard is not developing a new system, greater planning efforts appear paramount. To the extent that the Coast Guard and other users believe they need to add new kinds of data that do not currently exist in the system, both system users and developers need to agree on how to define and capture this information so that it can be of maximum use. 
In addition, if the Coast Guard decides to take a more limited approach, adding few, if any, new functional requirements, it runs the risk that the system will be of only partial use. Rather than taking advantage of the powerful planning and analysis capabilities that a robust geographic information system could make available, the more limited version could only be used to develop static maps of ports and their assets. Without effectively identifying and documenting the requirements for the new potential functions and data associated with the port security portion of its GIS, the Coast Guard faces the risk that the GIS will not provide port security officials with the functionality and information they need to adequately assess, identify, and mitigate security risks. Information technology project management principles and industry best practices emphasize that a project management plan is needed to define the technical and managerial processes necessary to satisfy project requirements. The plan should include, among other activities, developing a work breakdown structure with a schedule for all of the tasks to be performed, identifying and addressing project risks, and implementing a security policy. The planning document identified by the Coast Guard does not meet these standards. According to the Port Security Assessment program manager, the Coast Guard considers the project’s Concept of Operations to be its project plan. However, the Concept of Operations does not include important elements required in a project plan. For example: Tasks and schedules: The Concept of Operations identifies seven Port Security Assessment Program objectives, one of which is the use of a GIS, but does not identify any of the tasks or a schedule for carrying them out.
It also provides a list of eight high-level activities that need to be completed during the project, but again it lists no associated implementation tasks and schedule, although it estimates that port security assessments will be completed by December 2004. Since the document was written in February 2004, the assessment completion date has already been postponed by 2 months, and the project manager is unsure if the interim GIS contract will need to be renewed next spring because he is not sure when the Coast Guard’s own port security GIS will be completed and ready for implementation. Project risk: The Concept of Operations does not address project risks. As a result of not identifying potential risks, the project has encountered unexpected problems. For example, two of the eight high-level activities identified in the Concept of Operations, scheduled to be completed in April and July 2004, encountered unexpected problems that caused delays and could hinder their eventual completion. Security policy and project costs: The Concept of Operations does not address security policy and provides no plan for estimating project costs. For example, we asked program officials to provide documented cost information associated with the GIS component, and while we received some information, it was not sufficient to provide a clear indication of how much the GIS component would likely cost. Creating a plan that meets these requirements is essential to ensuring that the port security assessment GIS project can be successfully completed in the estimated timeframes with the resources that are available. The Coast Guard has already encountered problems caused by the lack of a reliable project schedule and risk assessment. According to Coast Guard officials, the Coast Guard is adding to an existing system rather than building a new one. Adding to an existing system, however, does not obviate the need for careful planning.
Until the Coast Guard develops a project management plan that includes a schedule and milestones, it is at increased risk that the GIS component of its port security assessment program could be inadequately managed, resulting in schedule slippages and inaccurate cost estimates. In addition, without identifying and mitigating risks and security concerns, the project could encounter unexpected issues that would need to be addressed, resulting in additional schedule and cost problems. The Coast Guard has proceeded to carry out the revised assessments of individual ports with generally favorable results. The Coast Guard resumed its assessment program using the Version 3 assessment components in March 2004 and as of August 1, 2004, had completed on-site assessments of 12 additional ports in 6 Captain of the Port zones. To provide an indication of the usefulness of these assessments, we spoke with the local Captain of the Port or other Coast Guard officials who participated in the assessment process at each of these zones. In general, all agreed that the assessments were of some usefulness. Two said that the assessments provided substantially new information that they did not previously have or consider. The other four found the completed assessment results useful because they brought an outside perspective to the port. They said the assessments were helpful in validating their previously completed assessments or their current awareness of the security posture within their ports. The value of these assessments could be enhanced, we believe, if the Coast Guard addressed the key management practices we have already discussed in its approach for developing its GIS. By themselves, the current assessments have value to local Coast Guard officials mostly in supplementing or validating their knowledge. However, when used with the GIS, these assessments also have potential value in helping the officials “close the loop” on information they may lack.
The three assessment components involve mainly data gathering and analysis, the results of which are to be fed into the GIS. Without the GIS to integrate and organize information gathered from these and other sources, those responsible for planning security cannot as easily identify the vulnerabilities in their ports and gaps in their awareness of the security posture within their ports that need to be addressed. At all six ports, the Captains of the Port or other Coast Guard officials said the value of the three assessment components would be enhanced if used in conjunction with a GIS that would be better able to visually display the entire security posture of the port rather than having to review individual hard copy assessment reports as they are now published. However, the functional requirements first need to be defined in order to effectively integrate these assessment components into the GIS. Until this planning step is taken, these assessments could fail to reach their full potential. Finally, the uncertainty brought on by the lack of planning for the GIS component is reflected in a similar uncertainty for the Port Security Assessment Program as a whole. For the assessment components, future plans are unclear beyond fiscal year 2005. Once all assessment reports of the 55 strategic ports are completed—a task the Coast Guard expects to be done by February 2005—the Coast Guard currently expects the assessment of assessments component to be an ongoing effort that will be updated by Coast Guard personnel as new assessment information becomes available. It expects the special assessments and terrorist operations assessments to continue through fiscal year 2005 as ports previously assessed under earlier assessment approaches are revisited, but it has made no decision about continuing them beyond that time.
Beyond fiscal year 2005, the Coast Guard is currently considering two options for what to do with the special assessment and terrorist operations assessment components of the program. The options are (1) continuing the program at other ports beyond the initial 55 or (2) conducting some recurring assessment at the 55 ports. Our discussions with Captains of the Port and Coast Guard officials surfaced mixed views of the future need for the three assessment components. One Captain whose port had been assessed under the Version 3 approach said he would like the assessment team to return to his port within 2 years, in order to assess the security measures put in place after the completion of the last assessment. By contrast, Captains for two other ports said they did not think that the team needed to return unless the critical infrastructure in their ports changed dramatically. The Coast Guard official responsible for the program said that as of July 2004, discussions were underway between program officials, other Coast Guard teams, and DHS officials as to how the program should proceed in the future to best augment port security efforts. The outcome of these discussions and future funding provided to the program will largely determine the extent to which the three assessment components continue to be implemented as part of the program. Although the GIS component will continue to be enhanced, its schedule for completion and implementation is uncertain. Thus, when the various program components—GIS and port assessments—are taken together, it is not clear what activities will be conducted over the longer term, who will do them, or how much they will cost. As the Coast Guard attempts to determine the future of the Port Security Assessment Program, it needs to ensure that the program provides maximum effectiveness to its main customers, Captains of the Port and Area Maritime Security Committees. The initial program had shortcomings that created a product of marginal value. 
The revised program has potential to be more useful because it intends to integrate all of the assessment information collected by the Coast Guard and other relevant security authorities and place this information in a GIS. However, the Coast Guard risks producing a system that is not as useful as it could be, because its approach lacks a defined management strategy, specific cost estimates, and a clear implementation schedule. Developing the program’s GIS component in this way is of particular concern, given the problems that have resulted when other agencies used the same approach in attempting to develop their information technology systems. And without a clear development strategy for GIS, the usefulness of the three assessment components may also be limited, because local Coast Guard officials and Area Maritime Security Committees will be less able to use them to fill the remaining gaps in their awareness of the security posture within their ports. Getting this project right is important, because the prospect of a well-functioning GIS has great appeal to many Coast Guard and other port stakeholders, who believe such a tool will be of considerable help in providing effective port security. To help ensure that the revised Port Security Assessment Program provides the most effective tool possible for security planning and response, we recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard to (1) define and document the GIS functional requirements and (2) develop a long-term project plan for the GIS and the Port Security Assessment Program as a whole (including cost estimates, schedule, and management responsibilities). We provided a draft of this report to the Department of Homeland Security and the Coast Guard for their review and comment.
The Coast Guard’s Marine Safety, Security, and Environmental Protection Directorate generally agreed with our recommendations, including the need to finalize data types and develop a detailed work plan for adding map layers. Coast Guard officials provided a number of technical clarifications, which we incorporated where appropriate to ensure the accuracy of our report. The Coast Guard commented in detail on two aspects of our report: The Coast Guard said our report tended to overlook many of the program’s significant achievements, particularly the value of the three assessment components. The Coast Guard emphasized the progress that it had made on tailoring assessments, completing them on schedule, and reducing their cost from more than $1 million per port to about $200,000 per port. The Coast Guard also said our characterization of its GIS made it appear that the Coast Guard was developing an entirely new information technology system. The Coast Guard emphasized that its GIS was part of an existing information technology system. Regarding these concerns, we would make the following points: First, the amount of emphasis the report places on GIS reflects our review of Coast Guard documents and interviews with numerous local Coast Guard officials, which showed that when compared with the three assessment components, the GIS had the potential to provide substantially more value. The program’s Concept of Operations contains multiple references to the critical and central role the GIS component will hold in providing a dynamic tool to its users (Captains of the Port and Area Maritime Security Committees) for port security planning and response. Further, the end users we talked with expressed a near-unanimous need for a dynamic GIS planning and response tool to increase maritime domain awareness. Second, we acknowledge that the Coast Guard’s GIS is part of a preexisting information technology system. In our view, however, this is not the key point.
The point is the need for GIS planning and functional requirements. When we assessed the Coast Guard’s development efforts against established industry best practices for developing information technology systems, we found the Coast Guard’s current efforts do not apply two key practices: defining what the GIS should do and establishing plans sufficient to ensure that the functional requirements can be successfully realized. Our past work has shown that when other agencies tried to develop systems without these practices, problems resulted. In short, without adequate planning, we believe that the GIS—and with it, the Port Security Assessment Program—is at risk of foundering. Hence, the aim of our recommendation is to produce a more effective GIS tool for port security officials. If the Coast Guard does establish functional requirements and a clear strategy for its GIS, the system will more likely meet its potential, and port security officials will be more likely to use it effectively. We are sending copies of this report to relevant congressional committees and subcommittees, the Secretary of Homeland Security, the Commandant of the Coast Guard, and other interested parties. If you or your staffs have any questions about this report, please contact me at (415) 904-2200 or at wrightsonm@gao.gov or Steve Calvo, Assistant Director, at (206) 287-4800 or at calvos@gao.gov. Key contributors to this report are listed in appendix II. This report will also be available at no charge on the GAO Web site at http://www.gao.gov.
Our two objectives for this report were to (1) discuss why and how the Port Security Assessment Program has changed over time and (2) assess the Coast Guard’s approach for implementing the Port Security Assessment Program as it is currently configured. To address why and how the assessment program changed, we reviewed Coast Guard documents, interviewed officials at Coast Guard headquarters responsible for implementing the program, and visited three ports that had been assessed under the previous program assessment approach. At these ports, we interviewed local Coast Guard personnel as well as numerous stakeholders to determine their views about how the assessment process was carried out. These stakeholders included, for example, operators of container terminals, power plants, cruise ship terminals, port authorities, and chemical facilities. We also relied on our previous work related to the program. For background information on the role of the geographic information system (GIS) as a tool for planning and response, we identified city and state government agencies that have GISs in place and talked with GIS managers and experts from these agencies. We also met with federal government GIS experts who had experience with implementing GIS within the federal environment. They included experts from the Federal Emergency Management Agency, Bureau of Customs and Border Protection, and United States Geological Survey. Finally, we met with GIS experts at universities and elsewhere to further our understanding. To assess the Coast Guard’s approach for implementing the Port Security Assessment Program in its current form, we interviewed a variety of Coast Guard and other officials. For GIS, we interviewed the Coast Guard’s GIS Program Manager and others to determine the progress made to date.
For the assessment portion of the program, we interviewed Coast Guard officials from the six Captain of the Port zones that are responsible for the security of the 12 ports assessed under the most recent program approach. To establish criteria for assessing the program’s current approach, we reviewed Coast Guard documents. We also reviewed information and documentation related to GIS applications and identified industry best practices for information systems acquisition and development to determine criteria for managing such a project. We reviewed documentation of the Coast Guard’s efforts to modify its port security GIS to determine whether the progress made met the criteria we established. In conducting our assessment, we also relied upon our work on the development of major information technology systems throughout the federal government. Our work, which was conducted from June 2003 through August 2004, was done in accordance with generally accepted government auditing standards. In addition to those named above, Chuck Bausell, Jason Berman, Christopher Hatscher, Nicholas Larson, Elizabeth Roach, and Stan Stenersen made key contributions to this report. | Created in the wake of the September 11, 2001, terrorist attacks, the Port Security Assessment Program was designed to evaluate security at the nation's 55 most economically and militarily strategic ports. Implemented by the U.S. Coast Guard, an agency of the Department of Homeland Security, the program focuses on identifying vulnerabilities, suggesting approaches to minimize them, and making the information available to those responsible for developing and implementing portwide security plans. The program has been under way for more than 2 years and has undergone several sets of changes, including the addition of a geographic information system (GIS). GAO was asked to discuss why and how the program changed and assess the Coast Guard's approach for implementing the program in its current form. 
Changes in the Port Security Assessment Program reflect attempts to deal with two main developments since the program's inception: evolving assessment needs at the ports and missteps in how the initial assessments were carried out. The program was designed as a comprehensive assessment of each port and its critical assets, such as passenger terminals, factories, cargo facilities, and bridges. However, the need for comprehensive assessments was diminished when many owners and operators of these critical assets began conducting their own assessments to comply with new regulatory requirements or apply for security grants. The program's assessments also proved more expensive than expected, and a GAO review conducted at the time found shortcomings in their quality and usefulness. The current program's assessments are more targeted in scope and nature, including the opportunity for local Coast Guard officials to request reviews of specific assets they do not know enough about. To help local authorities with security planning and response, the Coast Guard decided to incorporate a GIS. A GIS is a computer mapping system designed to have many information "layers" that can be easily updated and retrieved. The Coast Guard expects to complete the assessments at the 55 ports by February 2005, but no timeline exists for making the GIS component operational. Although the revised program holds promise, the implementation approach is at increased risk because the Coast Guard is not taking sufficient steps in the planning process. Contrary to best practices for technology systems development, the GIS is being developed without sufficient up-front work to identify how the system will be expected to perform. Both the GIS component and the program as a whole also lack a project plan detailing tasks, schedules, and costs. In other federal agencies, GAO has identified similar projects that failed when such steps were not followed. 
The initial response of local Coast Guard officials to the new, targeted assessments is generally positive. However, the assessments could be of greater benefit if functional requirements for the GIS were more clearly defined, so the Coast Guard could use the assessments to address gaps in security knowledge. |
Energy commodities are bought and sold on both the physical and financial markets. The physical market includes the spot market where products such as crude oil or gasoline are bought and sold for immediate or near-term delivery by producers, wholesalers, and retailers. Spot transactions take place between commercial participants for a particular energy product for immediate delivery at a specific location. For example, the U.S. spot market for West Texas Intermediate (WTI) crude oil is the pipeline hub near Cushing, Oklahoma, while a major spot market for natural gas operates at the Henry Hub near Erath, Louisiana. The prices set in the specific spot markets provide a reference point that buyers and sellers use to set the price for other types of the commodity traded at other locations. In addition to the cash markets, derivatives based on energy commodities are traded in financial markets. The value of the derivative contract depends on the performance of the underlying asset—for example, crude oil or natural gas. Derivatives include futures, options, and swaps. Energy futures include standardized exchange-traded contracts for future delivery of a specific crude oil, heating oil, natural gas, or gasoline product at a particular spot market location. An exchange designated by CFTC as a contract market standardizes the contracts, which participants cannot modify. The owner of an energy futures contract is obligated to buy or sell the commodity at a specified price and future date. However, the contractual obligation may be removed at any time before the contract expiration date if the owner sells or purchases other contracts with terms that offset the original contract. In practice, most futures contracts on NYMEX are liquidated via offset, so that physical delivery of the underlying commodity is relatively rare. Options give the purchaser the right, but not the obligation, to buy or sell a specific quantity of a commodity or financial asset at a designated price. 
Swaps are privately negotiated contracts that involve an ongoing exchange of one or more assets, liabilities, or payments for a specified time period. Like futures, options can be traded on an exchange designated by CFTC as a contract market. Both swaps and options can be traded off-exchange if the transactions involve qualifying commodities and the participants satisfy statutory requirements. Options and futures are used to buy and sell a wide range of energy, agricultural, financial, and other commodities for future delivery. Market participants use futures markets to offset the risk caused by changes in prices, to discover commodity prices, and to speculate on price changes. Some buyers and sellers of energy commodities in the physical markets trade in futures contracts to offset, or “hedge,” the risks they face from price changes in the physical market. Exempt commercial markets and OTC derivatives can serve the same function. Price risk is an important concern for buyers and sellers of energy commodities, because wide fluctuations in cash market prices introduce uncertainty for producers, distributors, and consumers of commodities and make investment planning, budgeting, and forecasting more difficult. To manage price risk, market participants may shift it to others more willing to assume the risk or to those having different risk situations. For example, if a petroleum refiner wants to lower its risk of losing money because of price volatility, it could lock in a price by selling futures contracts to deliver the gasoline in 6 months at a guaranteed price. Without futures contracts to manage risk, producers, refiners, and others would likely face greater uncertainty. The futures market also helps buyers and sellers determine, or “discover,” the price of commodities in the physical markets, thus linking the two markets together. 
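The refiner's hedge described above can be made concrete with a short numeric sketch. The function name, prices, and volume below are hypothetical illustrations, not figures from this statement, and the sketch assumes the futures price converges to the spot price when the hedge is closed out:

```python
def hedged_revenue(spot_at_sale, futures_sold_at, gallons):
    """Total revenue for a seller that locked in a price by selling futures.

    Physical leg: sell the product at whatever the spot price turns out to be.
    Futures leg: a short futures position gains when prices fall and loses
    when prices rise; assuming the futures price converges to spot at
    expiration, the two legs combine to the locked-in price.
    """
    physical = spot_at_sale * gallons
    futures_gain = (futures_sold_at - spot_at_sale) * gallons
    return physical + futures_gain

# Hypothetical: lock in 200 cents per gallon on 1,000,000 gallons of gasoline
# (integer cents keep the arithmetic exact for this illustration).
locked_in = 200 * 1_000_000
assert hedged_revenue(160, 200, 1_000_000) == locked_in  # spot fell: futures gain offsets
assert hedged_revenue(240, 200, 1_000_000) == locked_in  # spot rose: futures loss offsets
```

Whichever way spot prices move, the hedger's combined result is the locked-in price; what the hedge gives up is the chance to profit if prices move favorably.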
Price discovery is facilitated when (1) participants have current information about the fundamental market forces of supply and demand, (2) large numbers of participants are active in the market, and (3) the market is transparent. Market participants monitor and analyze a myriad of information on the factors that currently affect and that they expect to affect the supply of and demand for energy commodities. With that information, participants buy or sell an energy commodity contract at the price they believe the commodity will sell for on the delivery date. The futures market, in effect, distills the diverse views of market participants into a single price. In turn, buyers and sellers of physical commodities may consider those predictions about future prices, among other factors, when setting prices on the spot and retail markets. Other participants, such as investment banks and hedge funds, which do not have a commercial interest in the underlying commodities, use the futures market strictly for profit. These speculators provide liquidity to the market but also take on risks that other participants, such as hedgers, seek to avoid. In addition, arbitrageurs attempt to make a profit by simultaneously entering into several transactions in multiple markets in an effort to benefit from price discrepancies across these markets.

Both derivatives and physical markets experienced a substantial amount of change from 2002 through 2006. These changes have been occurring simultaneously, and the specific effect of any one of these changes on energy prices is unclear. Several recent trends in the futures markets have raised concerns among some market observers that these conditions may have contributed to higher physical energy prices. Specifically, from January 2002 to July 2006, the futures markets experienced higher prices, relatively higher volatility, increased trading volume, and growth in some types of traders.
During this period, monthly average spot prices for crude oil, gasoline, and heating oil increased by over 200 percent, and natural gas spot prices increased by over 140 percent. At the same time that spot prices were increasing, the futures prices for these commodities showed a similar pattern, with a sharp and sustained increase. For example, the price of crude oil futures increased from an average of $22 per barrel in January 2002 to $74 in July 2006. At the same time, the annual historical volatilities—measured using the relative change in daily prices of energy futures—between 2000 and 2006 generally were above or near their long-term averages, although crude oil and heating oil declined below the average and gasoline declined slightly at the end of that period. We also found that the annual volatility of natural gas fluctuated more widely than that of the other three commodities and increased in 2006 even though prices largely declined from the levels reached in 2005. Although higher volatility is often equated with higher prices, this pattern illustrates that an increase in volatility does not necessarily mean that price levels will increase. In other words, price volatility measures the variability of prices rather than the direction of the price changes. We also observed that at the same time that prices were rising and that volatility was generally above or near long-term averages, futures markets saw an increase in the number of noncommercial traders such as managed money traders, including hedge funds. The trends in prices and volatility made the energy derivatives markets attractive for the growing number of traders that were looking to either hedge against those changes or profit from them. Using CFTC’s large trader data, we found that from July 2003 to December 2006 crude oil futures and options contracts experienced the most dramatic increase, with the average number of noncommercial traders more than doubling from about 125 to about 286. 
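The historical volatility referred to above measures the variability of daily relative price changes, irrespective of price direction. A minimal sketch of one common way to compute it (the annualized standard deviation of daily log returns; the function name and sample price series are illustrative assumptions, not actual futures data):

```python
import math
import statistics

def annualized_volatility(daily_prices, trading_days=252):
    """Annualized historical volatility from a series of daily prices.

    Computed as the standard deviation of day-over-day log returns,
    scaled by the square root of the number of trading days in a year.
    """
    returns = [math.log(p1 / p0)
               for p0, p1 in zip(daily_prices, daily_prices[1:])]
    return statistics.stdev(returns) * math.sqrt(trading_days)

# A steadily rising series can be LESS volatile than a choppy, declining one,
# illustrating that volatility measures variability, not price direction.
steady_rise = [60.0, 60.5, 61.0, 61.5, 62.0, 62.5]
choppy_fall = [62.0, 59.0, 61.0, 57.0, 60.0, 55.0]
assert annualized_volatility(choppy_fall) > annualized_volatility(steady_rise)
```

This mirrors the pattern noted above for natural gas in 2006, where measured volatility increased even as price levels largely declined.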
As shown in figure 1, while the growth was less dramatic in the other commodities, the average number of noncommercial traders also showed an upward trend for unleaded gasoline, heating oil, and natural gas. Not surprisingly, our preliminary work also revealed that as the number of traders increased, so did the trading volume on NYMEX for all energy futures contracts, particularly crude oil and natural gas. Average daily contract volume for crude oil increased by 90 percent from 2001 through 2006, and natural gas increased by just over 90 percent. Unleaded gasoline and heating oil experienced less dramatic growth in their trading volumes over this period. Another notable trend, but one that is much more difficult to quantify, was the apparently significant increase in the amount of energy derivatives traded outside exchanges. Trading in these markets is much less transparent, and comprehensive data are not available because these energy markets are not regulated. While the Bank for International Settlements publishes data on worldwide OTC derivative trading volume for broad groupings of commodities, these data can serve only as a rough proxy for trends in the trading volume of OTC energy derivatives. According to these data, the notional amounts outstanding of OTC commodity derivatives excluding precious metals, such as gold, grew by over 850 percent from December 2001 to December 2005. In the year from December 2004 to December 2005 alone, the notional amount outstanding increased by more than 200 percent to over $3.2 trillion. Despite the lack of comprehensive energy-specific data on OTC derivatives, the recent experience of individual trading facilities further reveals the growth of energy derivatives trading outside of futures exchanges.
For example, according to its annual financial statements, the volume of non-futures energy contracts traded on the Intercontinental Exchange, also known as ICE, including financially settled derivatives and physical contracts, increased by over 400 percent to over 130 million contracts in 2006. Further, while some market observers believe that managed money traders were exerting upward pressure on prices by predominantly buying futures contracts, CFTC data we analyzed revealed that from the middle of 2003 through the end of 2006, the trading activity of managed money participants became increasingly balanced between buying (those that expect prices to go up) and selling (those that expect prices to go down). That is, our preliminary view of these data suggests that managed money traders as a whole were more evenly divided in their expectations about future prices than they had been in the past. We found that views were mixed about whether these trends exerted any upward pressure on prices. Some market participants and observers have concluded that large purchases of oil futures contracts by speculators could have created an additional demand for oil that could lead to higher prices. Contrary to this viewpoint, some federal agencies and other market observers took the position that speculative trading activity did not have a significant impact on prices. For example, an April 2005 CFTC study of the markets concluded that increased trading by speculative traders, including hedge funds, did not lead to higher energy prices or volatility. This study also argued that hedge funds provided increased liquidity to the market and dampened volatility.
Still others told us that while speculative trading in the futures market could contribute to short-term price movements in the physical markets, they did not believe it was possible to sustain a speculative “bubble” over time, because the two markets were linked and both responded to information about changes in supply and demand caused by such factors as the weather or geopolitical events. In the view of these observers and market participants, speculation could not lead to artificially high or low prices over a long period of time.

The developments in the derivatives markets in recent years have not occurred in isolation. Conditions in the physical markets were also undergoing changes that could help explain increases in both derivative and physical commodity prices. As we have reported, futures prices typically reflect the effects of world events on the price of the underlying commodity such as crude oil. For example, political instability and terrorist acts in countries that supply oil create uncertainties about future supplies that are reflected in futures prices in anticipation of an oil shortage and expected higher prices in the future. Conversely, news about a new oil discovery that would increase world oil supply could result in lower futures prices. In other words, futures traders’ expectations of what may happen to world oil supply and demand influence their price bids. According to the Energy Information Administration (EIA), world oil demand has grown from about 59 million barrels per day in 1983 to more than 85 million barrels per day in 2006 (fig. 2). While the United States accounts for about a quarter of this demand, rapid economic growth in Asia has also stimulated a strong demand for energy commodities. For example, EIA data show that from 1983 to 2004, China’s average daily demand for crude oil increased almost fourfold. The growth in demand does not, by itself, lead to higher prices for crude oil or any other energy commodity.
For example, if the growth in demand were exceeded by a growth in supply, prices would fall, other things remaining constant. However, according to EIA, the growth in demand outpaced the growth in supply, even with spare production capacity included in supply. Spare production capacity is surplus oil that can be produced and brought to the market relatively quickly to rebalance the market if there is a supply disruption anywhere in the world oil market. As shown in figure 3, EIA estimates that global spare production capacity in 2006 was about 1.3 million barrels per day, compared with spare capacity of about 10 million barrels per day in the mid-1980s and 5.6 million barrels a day as recently as 2002. Major weather and political events can also lead to supply disruptions and higher prices. In its analysis, EIA has cited the following examples:

Hurricanes Katrina and Rita removed about 450,000 barrels per day from the world oil market from June 2005 to June 2006.

Instability in major oil-producing countries of the Organization of Petroleum Exporting Countries (OPEC), such as Iraq and Nigeria, has lowered production in some cases and increased the risk of future production shortfalls in others.

Oil production in Russia, a major driver of non-OPEC supply growth during the early 2000s, was adversely affected by a worsened investment climate as the government raised export and extraction taxes.

The supply of crude oil affects the supply of gasoline and heating oil, and just as production capacity affects the supply of crude oil, refining capacity affects the supply of those products distilled from crude oil. As we have reported, refining capacity in the United States has not expanded at the same pace as the demand for gasoline.
Inventory, another factor affecting supplies and therefore prices, is particularly crucial to the supply and demand balance, because it can provide a cushion against price spikes if, for example, production is temporarily disrupted by a refinery outage or other event. Trends toward lower levels of inventory may reduce the costs of producing gasoline, but such trends may also cause prices to be more volatile. That is, when a supply disruption occurs or there is an increase in demand, there are fewer stocks of readily available gasoline to draw on, putting upward pressure on prices. However, others noted a different trend for crude oil inventories. That is, prices have remained high despite patterns of higher levels of oil in inventory. In addition to the supply and demand factors that generally apply to all energy commodities, specific developments can affect particular commodities. For instance, the growth of special gasoline blends—so- called “boutique fuels”—can affect the price of gasoline. As we have reported, it is generally agreed that the higher costs associated with supplying special gasoline blends contributed to higher gasoline prices, either because of more frequent or more severe supply disruptions or because the costs were likely passed on, at least in part, to consumers. Like the futures market, the physical market has undergone substantial changes that could affect prices. But market participants and other observers disagree about the impact of these changes on increasing energy prices. Some observers believe that higher energy prices were solely the result of supply and demand fundamentals, while others believe that increased futures trading activity contributed to higher prices. Another consideration is that the value of the U.S. dollar on open currency markets could also affect crude oil prices. For example, because crude oil is typically denominated in U.S. 
dollars, the payments that oil-producing countries receive for their oil are also denominated in U.S. dollars. As a result, a weak U.S. dollar decreases the value of the oil sold at a given price, and oil-producing countries may wish to increase prices for their crude oil in order to maintain the purchasing power in the face of a weakening U.S. dollar. The relative effect of each of these changes remains unclear, however, because all of the changes were occurring simultaneously. Monitoring these trends and patterns in the future will be important in order to better understand their effects, protect the public, and ensure market integrity. Energy products are traded on multiple markets, some of which are subject to varying levels of CFTC oversight and some of which are not. This difference in oversight has caused some market observers to question whether CFTC needs broader oversight authority. As we have seen, under the CEA CFTC’s regulatory authority is focused on overseeing futures exchanges, protecting the public, and ensuring market integrity. But in recent years two additional venues for trading energy futures contracts that are not subject to direct CFTC oversight have grown and become increasingly important—exempt commercial markets and OTC markets. However, traders in these markets are subject to the CEA’s antimanipulation and antifraud provisions, which CFTC has the authority to enforce. Also, exempt commercial markets must provide CFTC with data for certain contracts. Futures exchanges such as NYMEX are subject to direct CFTC regulation and oversight. CFTC generally focuses on fulfilling three strategic goals related to these exchanges. First, to ensure the economic vitality of the commodity futures and options markets, CFTC conducts its own direct market surveillance and also reviews the surveillance efforts of the exchanges. 
Second, to protect market users and the public, CFTC promotes sales practice and other customer protection rules that apply to futures commission merchants and other registered intermediaries. Finally, to ensure the market’s financial integrity, CFTC reviews the audit and financial surveillance activities of self-regulatory organizations. CFTC conducts regular market surveillance and oversight of energy trading on NYMEX and other futures exchanges. Oversight activities include: detecting and preventing disruptive practices before they occur and keeping the CFTC commissioners informed of possible manipulation or abuse; monitoring NYMEX’s compliance with CFTC reporting requirements and its enforcement of speculative position limits; investigating traders with large open positions; and documenting cases of improper trading. In contrast to the direct oversight it provides to futures exchanges, CFTC does not have general oversight authority over exempt commercial markets, where qualified entities may trade through an electronic trading facility. According to CFTC officials, these markets have grown in prominence in recent years. Some market observers have questioned their role in the energy markets and the lack of transparency about their trading activities. Trading energy derivatives on exempt commercial markets is permissible only for eligible commercial entities—a category of traders broadly defined in the CEA to include firms with a commercial interest in the underlying commodity—as well as other sophisticated investors such as hedge funds. These markets are not subject to CFTC’s general direct oversight but are required to maintain communication with CFTC. Among other things, an exempt commercial market must notify CFTC that it is operating as an exempt commercial market and must comply with certain CFTC informational, record-keeping, and other requirements. Energy derivatives also may be traded OTC rather than via an electronic trading facility. 
OTC derivatives are private transactions between sophisticated counterparties, and the parties involved are not required to disclose information about their trades. Derivatives transactions in both exempt commercial markets and OTC markets are bilateral contractual agreements in which each party is subject to and assumes the risk of nonperformance by its counterparty. These agreements differ from derivatives traded on an exchange, where a central clearinghouse stands behind every trade. While some observers have called for more oversight of OTC derivatives, most notably for CFTC to be given greater oversight authority over this market, others consider such action unnecessary. Supporters of more CFTC oversight authority believe that more transparency and accountability would better protect the regulated markets and consumers from potential abuse and possible manipulation. Some question how CFTC can be assured that trading on the OTC market is not adversely affecting the regulated markets and ultimately consumers, given the lack of information about OTC trading. However, in 1999 the President’s Working Group on Financial Markets concluded that OTC derivatives generally were not subject to manipulation because contracts were settled in cash based on a rate or price determined in a separate highly liquid market and did not serve a significant price discovery function. Moreover, the market is limited to professional counterparties that do not need the protections against manipulation that the CEA provides to retail investors. Finally, the group has recently noted that, in response to concerns about CFTC’s authority, the commission’s enforcement actions against energy companies are evidence that it has adequate tools to combat fraud and manipulation when they are detected.
CFTC officials have said that while they have reason to believe these off-exchange activities can affect prices determined on a regulated exchange, they also generally believe that the commission has sufficient authority over OTC derivatives and exempt energy markets. However, CFTC has recently begun to take steps to clarify its authority to obtain information about pertinent off-exchange transactions. In a June 2007 proposed rulemaking, CFTC noted that having data about the off-exchange positions of traders with large positions on regulated futures exchanges could enhance the commission’s ability to deter and prevent price manipulation or any other disruptions to the integrity of the regulated futures markets. According to CFTC officials, the commission has also proposed amendments to clarify its authority under the CEA to collect information and to bring fraud actions in principal-to-principal transactions in these markets, enhancing CFTC’s ability to enforce antifraud provisions of CEA. In closing, our work to date shows that the derivatives and physical markets have both undergone substantial change and evolution. Given the changes in both markets, causality is unclear, and the situation warrants ongoing review and analysis. We commend the Subcommittee’s efforts in this area. Along with the overall concern about rising prices, questions have also been raised about CFTC’s authority to protect investors from fraudulent, manipulative, and abusive practices. CFTC generally believes that the commission has sufficient authority over OTC derivatives and exempt energy markets. However, CFTC has taken an important step by clarifying its authority to obtain information about pertinent off-exchange transactions. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee might have. For further information about this testimony, please contact Orice M. 
Williams on (202) 512-8678 or at williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions include John Wanska (Assistant Director), Kevin Averyt, Ross Campbell, Emily Chalmers, John Forrester, and Paul Thompson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Energy prices for crude oil, heating oil, unleaded gasoline, and natural gas have risen substantially since 2002, generating questions about the reasons for the increase. Some observers believe that the higher energy prices were solely due to supply and demand fundamentals while others believe that increased futures trading activity may also have contributed to higher prices. This testimony highlights GAO's preliminary findings related to (1) trends and patterns in the futures and physical energy markets and the effect of these trends on energy prices and (2) the Commodity Futures Trading Commission's (CFTC) regulatory and enforcement authority over derivatives markets. GAO analyzed futures and large trader reporting data; trading data obtained from the New York Mercantile Exchange (NYMEX) for crude oil, heating oil, unleaded gasoline, and natural gas; and various other sources of energy-related data. GAO also analyzed relevant academic and other studies on the subject and interviewed market participants, experts, and officials at relevant federal agencies. Rising energy prices have been attributed to a variety of factors, and recent trends in the futures and physical markets highlight the changes that have occurred in both markets from 2002 through 2006. 
Specifically: (1) inflation-adjusted energy prices in both the futures and physical markets increased by over 200 percent during this period for three of the four commodities we reviewed; (2) volatility (a measurement of the degree to which prices fluctuate over time) in energy futures prices generally remained above historic averages during the beginning of the time period but declined through 2006 for three of the four commodities we reviewed; and (3) the number of noncommercial participants in the futures markets, including hedge funds, has grown, along with the volume of energy futures contracts traded and the volume of energy derivatives traded outside traditional futures exchanges. At the same time these changes were occurring in the futures markets for energy commodities, tight supply and rising demand in the physical markets pushed prices higher. For example, while global demand for oil has risen at high rates, spare oil production capacity has fallen since 2002, and increased political instability in some of the major oil-producing countries has threatened the supply of oil. Refining capacity also has not expanded at the same pace as the demand for gasoline. The individual effect of these collective changes on energy prices is unclear, as many factors have combined to affect energy prices. Monitoring these changes will be important to protect the public and ensure market integrity. Based on its authority under the Commodity Exchange Act (CEA), CFTC primarily focuses its oversight on the operations of traditional futures exchanges, such as NYMEX, where energy futures are traded. However, energy derivatives are also traded on other markets, namely exempt commercial markets and over-the-counter (OTC) markets, both of which have experienced increased volumes in recent years.
Exempt commercial markets are electronic trading facilities that trade exempt commodities between eligible participants, and OTC markets involve eligible parties that can enter into contracts directly off-exchange. Both of these markets are exempt from general CFTC oversight, but they are subject to the CEA's antimanipulation and antifraud provisions and CFTC enforcement of those provisions. Because of these varying levels of CFTC oversight, some market observers question whether CFTC needs broader authority over all derivative markets. CFTC generally believes that the commission has sufficient authority over OTC derivatives and exempt energy markets. However, CFTC has recently taken additional actions to clarify its authority to obtain information about pertinent off-exchange transactions. |
The SBIR program was initiated in 1982 and has four purposes: (1) to use small businesses to meet federal R&D needs, (2) to stimulate technological innovation, (3) to increase commercialization of innovations derived from federal R&D efforts, and (4) to encourage participation in technological innovation by small businesses owned by disadvantaged individuals and women. The purpose of the STTR program—initiated in 1992—is to stimulate a partnership of ideas and technologies between innovative small businesses and research institutions through federally funded R&D. The SBIR and STTR programs are similar in that participating agencies identify topics for R&D projects and support small businesses, but the STTR program requires the small business to partner with a research institution—such as a nonprofit college or university or federally funded R&D center. The programs are currently authorized through fiscal year 2022. The Small Business Act, which authorizes the programs, establishes the amount of an agency’s funding that must be spent on the SBIR and STTR programs each year. In general, the programs are similar across participating agencies. All of the participating agencies follow the same general process to obtain proposals from and make awards to small businesses for both the SBIR and STTR programs. While the Small Business Act requires participating agencies to manage their programs to meet the requirements of the act, the policy directives, and SBA regulations, each participating agency has considerable flexibility in designing and managing the specifics of its programs under these requirements, such as determining research topics, selecting award recipients, and administering funding agreements. At least once per year, each participating agency issues a solicitation requesting proposals for projects in topic areas determined by the agency. Each participating agency uses its own process to review proposals and determine which proposals should receive awards. 
Those agencies that have both SBIR and STTR programs usually use the same process for both programs. Small businesses are allowed to apply with the same proposal to multiple agencies, but the small business is not allowed to accept multiple awards for the same work. In August 2009, the Senate Committee on Commerce, Science, and Transportation held a hearing on fraud, waste, and abuse in the SBIR program. Shortly after that hearing, the Council of the Inspectors General on Integrity and Efficiency’s Misconduct in Research Working Group began to discuss fraud in the SBIR and STTR programs and to coordinate efforts related to these programs among the OIGs from SBA and each of the 11 participating agencies. The working group also established a separate subgroup of investigative agents from SBA, the 11 participating agencies’ OIGs, and the Department of Justice to share information on ongoing cases, lessons learned, and best practices related to SBIR investigations. The reauthorization act required SBA to add fraud, waste, and abuse prevention requirements to the policy directives for agencies to implement. In 2012, SBA issued revised policy directives for the SBIR and STTR programs that included new requirements designed to help agencies prevent potential fraud, waste, and abuse in the SBIR and STTR programs. SBA developed these requirements in consultation with participating agencies and a working group of OIGs. The fraud, waste, and abuse sections of the SBIR and STTR policy directives contain the same information and requirements. To meet the 10 requirements, each participating agency must, at a minimum, take the actions summarized below:

Require certifications from award recipients that they are in compliance with specific program requirements at the time of the award, as well as after the award and during the life cycle of the funding agreement.

Include information explaining how an individual can report fraud, waste, and abuse on the agency’s respective program website and in each funding solicitation using the method provided by the agency’s OIG, such as publishing the agency’s fraud hotline number.

Designate at least one individual in the agency to, at a minimum, serve as the liaison for the SBIR or STTR program, the OIG, and the agency’s suspension and debarment official and ensure that inquiries regarding fraud, waste, and abuse are referred to the appropriate office.

Include on its program website information concerning successful prosecutions of fraud, waste, and abuse in the programs.

Establish a written policy requiring all personnel involved with the program to notify the OIG if anyone suspects fraud, waste, and/or abuse and ensure the policy is communicated to all personnel.

Create or ensure there is an adequate system to enforce accountability by developing separate standardized templates for referrals to the OIG and the Suspension and Debarment Official, as well as a process for tracking such referrals.

Ensure compliance with program eligibility requirements and terms of funding agreements.

Work with the agency’s OIG in its efforts to establish fraud detection indicators; coordinate sharing of information on fraud, waste, and abuse between federal agencies; and improve education and training to program officials, applicants, and award recipients.

Develop policies and procedures to avoid funding essentially equivalent work already funded by another agency.

Consider enhanced reporting requirements during the funding agreement.
According to the Inspector General Act of 1978, as amended, agency OIGs are established to create independent and objective units that provide leadership and coordination and recommend policies for activities designed to promote economy, efficiency, and effectiveness in the administration of agencies' programs and to prevent and detect fraud and abuse in those programs and operations. Additional purposes of agency OIGs include conducting and supervising audits, inspections, and investigations. Furthermore, according to the act, OIGs are to provide a means for keeping the head of the agency and Congress fully and currently informed about problems and deficiencies related to the administration of such programs and operations and the need for and progress of corrective action. In addition to the requirements for the participating agencies, the reauthorization act included requirements for the participating agencies' OIGs. Each OIG is to cooperate to prevent fraud, waste, and abuse in the SBIR program and the STTR program by: establishing fraud detection indicators; reviewing regulations and operating procedures of the federal agency; coordinating information sharing between federal agencies, to the extent otherwise permitted under federal law; and improving the education and training of, and outreach to, administrators of the SBIR program and the STTR program of the federal agency, applicants to the SBIR program or the STTR program, and recipients of awards under the SBIR or STTR program. In addition, each participating agency's OIG is to submit an annual report to specified congressional committees detailing any SBIR or STTR fraud, waste, or abuse investigations over the past year, including the costs for those investigations, among other things. The Small Business Act and the SBIR and STTR policy directives outline SBA's responsibilities for overseeing the SBIR and STTR programs.
Specifically, the Small Business Act requires SBA to survey and monitor the operation of the SBIR and STTR programs at the agency level. Further, the policy directives state that SBA is responsible for ensuring that each participating agency has taken steps to maintain a fraud, waste, and abuse prevention system to minimize their impact on the programs. Our Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework) provides comprehensive guidance for conducting fraud risk assessments and using the results as part of the development of a robust antifraud strategy. It also describes concepts and leading practices for establishing an organizational structure and culture that are conducive to fraud risk management, designing and implementing controls to prevent and detect potential fraud, and monitoring and evaluating fraud risk management activities. Agencies have varied in their implementation of the SBIR and STTR fraud, waste, and abuse prevention requirements in SBA's policy directives, although some agencies have taken actions beyond those required in the policy directives. Beyond issuing the policy directives, however, SBA has taken few actions to oversee the participating agencies' implementation of the fraud, waste, and abuse prevention requirements. Further, SBA has not evaluated the fraud, waste, and abuse requirements in the directives to determine whether the requirements are clear and effective. Agencies have varied in their implementation of the 10 minimum fraud, waste, and abuse prevention requirements included in SBA's policy directives. All 11 agencies fully implemented 2 of the requirements: to include how to report fraud, waste, and abuse on the agency's website and solicitation, and to designate a liaison for the OIG and Suspension and Debarment Official. For 6 other requirements, more than half of the agencies had fully implemented the requirements, while the other agencies had partially implemented or not implemented them.
For the remaining 2 requirements, 3 agencies had fully implemented 1, while only 1 agency had fully implemented the other. Figure 1 shows the number of agencies that implemented each of the requirements, and appendix II provides additional information on individual agencies' implementation of the requirements. Agencies varied in their degrees of implementation for some requirements. For example, NASA is the only agency that fully implemented all three components of the requirement to develop templates for referrals to the OIG and Suspension and Debarment Official. Three agencies did not implement any of the components of the requirement. Of the 7 agencies that partially implemented this requirement, none had both templates, 2 agencies did not have a tracking system in place, and 1 agency had a partial tracking system in place. Officials from most agencies said that they either use their OIG's standard process or contact their OIG in person, by telephone, or by e-mail to make referrals to the OIG. Additionally, 3 agencies—HHS, NASA, and NSF—implemented all of the components of the requirement to coordinate with the OIG on fraud, waste, and abuse, while the other agencies varied in their implementation of the components. For example, 7 agencies' SBIR or STTR offices did not implement, or did not fully implement, the requirement to work with their OIG to establish fraud detection indicators. In addition to the variation in the number of agencies that have implemented each requirement, we found that agencies varied in how they implemented the requirements. Some examples of how agencies have implemented requirements include the following: Require certifications from awardees. Small businesses must certify that they are eligible for the SBIR and STTR programs and that they meet specific program requirements during the life of the funding agreement. Six agencies fully implemented and 5 agencies partially implemented the certification requirement.
The agencies that fully implemented the requirement used the language required by the policy directives and told us they collected all the certifications. However, we identified some variation in how the remaining agencies implemented the requirement. For example, 1 agency—HHS—requires awardees to sign certifications at the appropriate times but does not collect the signed life cycle certifications from small businesses. Instead, awardees complete the life cycle certifications and maintain them on file at their locations. HHS officials told us that this practice is in accordance with HHS's records and retention policy and the SBIR contract solicitation, and that they believe HHS is compliant with the policy directives because small businesses must provide the life cycle certifications upon request. The remaining 4 agencies that partially implemented the requirement required small businesses to submit certifications but did not use the certification language that is required by the policy directives. List examples of successful prosecutions on website. Six agencies fully implemented the requirement to post examples of successful fraud prosecutions on their SBIR website, and 2 agencies partially implemented the requirement, with one of their components posting the information. The agencies that posted examples of successful prosecutions implemented the requirement differently, in the absence of specific direction. Specifically, 4 agencies posted the information on a page designated for reporting fraud, waste, and abuse, and 3 agencies posted the information on other pages, where it could be more difficult for applicants or awardees to find. For example, 1 agency posted an example of a prosecution with other program news stories, such as announcements of solicitations. However, as of February 2017, the information was on the second page of the list of news stories, making it difficult to find.
Additionally, 1 agency posted the information on an internal website that award recipients or applicants cannot typically access. Officials from the 3 agencies that did not post such examples told us they did not have any successful prosecutions of their own to post or did not know where to find such information. Consider enhanced reporting requirements. Nine agencies fully implemented the requirement to consider enhanced reporting requirements during the funding agreement; however, interpretations of how to implement the requirement varied widely among the agencies. The policy directives require agencies only to consider enhanced reporting requirements during the funding agreement and do not require agencies to implement such reporting requirements. Officials from the agencies that had implemented the requirement said they typically interpreted enhanced reporting as monthly invoices and reports, project demonstrations, or additional certifications. We identified one instance in which an agency has not fully implemented a requirement, and this could affect its ability to prosecute fraud, waste, and abuse in the programs. As discussed previously, HHS requires small business awardees to sign certifications at the appropriate times, but these small businesses are not required to submit the life cycle certifications to the agency. HHS OIG officials told us that they raised concerns about this practice to HHS SBIR officials based, in part, on their 2014 review of the HHS SBIR and STTR programs. Of the 11 participating agencies, HHS is the only agency that administers but does not collect the self-certifications from SBIR and STTR applicants or awardees, according to agency officials. Having copies of the certification forms allows agencies to document small businesses' assurance that they are aware of and agree to comply with program requirements.
HHS OIG officials told us they continue to be concerned that HHS does not require the small businesses to submit the SBIR and STTR certifications to the SBIR officials. In addition, OIGs at most of the participating agencies said that they often use these self-certifications to show intent to commit fraud, if any fraud later occurs, as the small businesses are certifying the accuracy and truth of the information they are submitting to the agency. Without collecting copies of the certification forms, it may be more difficult to prosecute HHS SBIR or STTR awardees if they commit fraud, waste, or abuse. In addition to the requirements in the policy directives, officials from 9 agencies told us they implemented other activities to help identify or prevent fraud, waste, and abuse in the programs. Examples of agency activities include the following: Conducting site visits. Officials from 4 agencies—Education, DOE, DHS, and NASA—told us they conduct site visits to their SBIR or STTR awardees, either in person or virtually, in part to identify and prevent fraud, waste, and abuse. Site visits allow officials to view, either in person or remotely, the awardee's research efforts and can confirm that the necessary facilities exist for technical R&D work. According to Department of Education officials, its virtual site visits serve as a way to confirm that the SBIR awardee has completed the work proposed. Establishing offices or working groups. Two agencies have established or are planning to establish offices or working groups to address potential fraud, waste, and abuse issues. For example, NASA established the Acquisition Integrity Program, which is an office that works as the liaison between the agency's SBIR and STTR program staff and the OIG, as one way to help reduce and prevent fraud, waste, and abuse in the programs.
This office monitors the coordination of criminal, civil, contractual, and administrative remedies for significant investigations of fraud or corruption related to procurement activities agency-wide, among other things. In addition, DOD's SBIR and STTR program manager told us that, beginning in early 2017, he plans to convene a working group composed of SBIR and STTR program officials and investigative service staff from the Army, Navy, Air Force, and other parts of DOD to discuss best practices and share information regarding fraud, waste, and abuse issues. Orientation meetings. According to officials, 3 agencies—DHS, EPA, and NSF—hold in-person orientation meetings with awardees to provide an overview of the agency's SBIR or STTR program and related rules and requirements, including a presentation on fraud, waste, and abuse. Certifications and reporting. Six agencies require more certifications or reporting from awardees than is required in the policy directives. Specifically, officials from these agencies told us they require additional certifications beyond the three mandated life cycle certifications: (1) at the time of final payment or disbursement of Phase I funding, (2) before more than half of Phase II funding has been paid or disbursed, and (3) before final payment or disbursement of Phase II funding. For example, DOT and some Commerce officials told us that they require awardees to submit life cycle certifications with every invoice. Army, EPA, and NASA officials told us that they require monthly or quarterly reporting by awardees. Further, both Army and NSF officials told us that they require the additional reporting in order for awardees to receive their SBIR or STTR funds. Representatives from each of the 11 SBIR and STTR small businesses that we interviewed said that they had not experienced any challenges in complying with the programs' fraud, waste, and abuse prevention requirements.
Of the 10 fraud, waste, and abuse prevention requirements that the agencies must implement, small businesses are directly affected by only 2 requirements: providing life cycle certifications and, if required by their awarding agency, participating in fraud, waste, and abuse training. Representatives from each of the 11 small businesses we interviewed told us they did not find these fraud, waste, and abuse prevention requirements to be burdensome, and representatives from 9 of the small businesses told us that the level of the SBIR and STTR fraud, waste, and abuse prevention requirements is about right. Representatives from nearly all of the small businesses we interviewed said they had seen the effects of the agencies' implementation of some of the requirements. For example, representatives from 10 of the small businesses reported that at least 1 agency from which they had received an award had provided information to them on how to report fraud, waste, and abuse to the agency's OIG, either on the agency's SBIR or STTR website or in the solicitation that they used to apply for funding. Further, representatives from all 11 small businesses that we interviewed said that they had not observed any fraud, waste, or abuse in the SBIR or STTR programs by another small business. SBA has taken few actions to oversee agencies' implementation of the policy directives' minimum requirements to address fraud, waste, and abuse in the SBIR and STTR programs. In 2012, SBA convened a group of SBIR and STTR program managers and OIG officials to develop and issue the fraud, waste, and abuse prevention requirements. Additionally, according to SBA officials, SBA checked the agencies' SBIR and STTR program websites to confirm that each agency provided information on its website and in each solicitation on how to report fraud, waste, and abuse, as required by one of the fraud, waste, and abuse prevention requirements.
Further, in 2016, SBA officials said they made some improvements to the SBIR.gov website, which includes SBA's database of SBIR and STTR awards, such as including the full text of proposals for all SBIR or STTR awards, improving the appearance of awards in the analytics section of the website, and improving the accuracy of the data by reviewing incoming data for completeness and duplicate entries. With this information available, SBA officials said, agencies could search the website to identify essentially equivalent work that could lead to duplicate funding. However, most of the program managers we interviewed said that the website—in its current form—was not useful for searching for duplicate awards. Finally, according to officials, SBA provides opportunities for SBIR and STTR program managers to discuss the implementation of the fraud, waste, and abuse prevention requirements. SBA hosts meetings with SBIR and STTR program managers every 2 months to discuss various aspects of the programs. According to SBA's meeting agendas, fraud, waste, and abuse was on the agenda once between July 2015 and May 2016, although officials said that program managers could discuss the requirements more frequently, if needed. In addition to the program managers' meetings, SBA officials said they are considering establishing a working group that could address fraud, waste, and abuse, but they had not done so as of January 2017. However, SBA had not taken steps to ensure that each agency had implemented all of the fraud, waste, and abuse requirements. As noted earlier, the Small Business Act requires SBA to survey and monitor the operation of the SBIR and STTR programs at the agency level. Further, the policy directives state that SBA is responsible for ensuring that each participating agency has taken steps to maintain a fraud, waste, and abuse prevention system to minimize their impact on the programs.
With the exception of its efforts to confirm that agencies had included information on reporting fraud, waste, and abuse on their websites and in solicitations, described above, SBA has not confirmed implementation of the fraud, waste, and abuse prevention requirements. Specifically, SBA officials do not know the status of agencies' implementation because they have not requested documentation from the agencies or other evidence to determine whether implementation has occurred. Without confirming that participating agencies are implementing the requirements, SBA does not have reasonable assurance that each agency has a system in place to help reduce its vulnerability to fraud, waste, and abuse. SBA officials told us that they believe that they have fulfilled their role to oversee agencies' implementation of the fraud, waste, and abuse prevention requirements by convening the group of program managers and OIG officials to develop and issue the requirements. However, SBA's issuance of the requirements neither constitutes oversight of those requirements nor ensures their implementation by the 11 participating agencies. SBA updated the SBIR and STTR policy directives in 2012 to include the fraud, waste, and abuse prevention requirements. However, SBA officials said they have not taken action since 2012 to review the requirements to determine whether they are effective or whether any revisions are needed. We identified requirements that some agency officials said were not clear or that may be unnecessary. Specifically, officials said 3 of the requirements were not clear. List examples of successful prosecutions on agency websites. The policy directives require agencies to list examples of successful fraud prosecutions on their websites, but they do not indicate whether the examples must be from the agency itself or whether the agency is to post examples from other participating agencies in the absence of its own examples.
As noted earlier, officials at 3 agencies told us they were unaware of, or did not have any, successful SBIR or STTR prosecutions to post. However, if the requirement is designed to deter fraud, it may be useful for agencies to post examples of prosecutions regardless of the agency in which the example originated. A representative from 1 small business we interviewed said that seeing the posted prosecutions on an agency's SBIR website made the agency's efforts against fraud, waste, and abuse more credible. For this reason, in our assessment of agencies' implementation of the requirements, we concluded that an agency had not implemented the requirement if it did not include any examples of successful prosecutions on its website, even if the agency said that it did not have any. In addition, the policy directives do not indicate where on the website each agency is supposed to post these examples of successful prosecutions. As mentioned previously, the agencies that posted this information did so in different places on their websites. According to NSF program officials, NSF's SBIR website has more than 80 pages and the officials indicated that it was unclear where prosecution information should be posted. Additionally, OIG officials at 1 agency raised concerns about the location of the examples that the SBIR program had posted, noting that if the point of posting such prosecutions was to deter fraud, the information should be on the main part of the website to ensure that users saw it. However, officials at 1 agency raised potential privacy concerns about posting information about successful prosecutions on their program website, indicating that it was not clear in the policy directives what kind of information agencies are required to include on their websites. Consider enhanced reporting.
The policy directives require agencies only to consider enhanced reporting requirements during the funding agreement and do not require agencies to implement such reporting requirements. As mentioned previously, the 9 agencies that fully implemented this requirement did so in various ways. Some agency officials—including officials at the 2 agencies that had not fully implemented the requirement—told us they were unclear about what the requirement meant. In our assessment of agencies’ implementation of the requirements, we determined that an agency implemented the requirement if it considered any type of reporting not already required by the policy directives. Further, program managers at 1 agency raised a concern that enhanced reporting could constitute an undue burden on the small businesses. Policies to avoid funding essentially equivalent work. The requirement for participating agencies to check “essentially equivalent work” is inconsistent with the definition of that term elsewhere in the policy directives, which could make the requirement unclear. The fraud, waste, and abuse requirement in the policy directives specifies that agencies should check for essentially equivalent work funded by other agencies. Most agencies have implemented the requirement as written, and in our assessment of agencies’ implementation of the requirements, we determined that an agency implemented the requirement if it checked for essentially equivalent work at 1 or more agencies. However, elsewhere in the policy directives, “essentially equivalent work” is defined as work that is substantially the same research in more than one application submitted to the same federal agency or two or more different federal agencies. Two of the 5 agencies that have multiple offices involved in their SBIR or STTR programs did not check, or did not fully check, for essentially equivalent work within their own agencies. 
SBA officials acknowledged that the policy directives were inconsistent in this regard but said that the agencies should look for the definition elsewhere in the policy directives. However, without consistent definitions of the terms, SBA has no assurance that participating agencies are appropriately checking for such work that they fund as well as such work funded by other agencies, placing agencies at a higher risk of funding essentially equivalent work. Under federal standards for internal control, an oversight body should oversee the entity's internal control system, and communication is necessary for effective oversight. The standards state that management is to evaluate and document the results of ongoing monitoring, and personnel may report the nature of their findings to the oversight body. According to the standards, the oversight body is to receive quality information on significant matters relating to risks, changes, or issues that impact the entity. As mentioned previously, SBA meets every 2 months with the SBIR and STTR program managers to discuss various issues. According to the SBA officials, no one has raised questions or expressed confusion about implementing the fraud, waste, and abuse prevention requirements. However, given that we identified several areas in which agencies expressed confusion or implemented the requirements differently, SBA may need to more proactively solicit agency information, which would be consistent with internal control standards for an oversight body. Additionally, based on our analysis of agencies' implementation of the requirements and interviews with program managers and OIG officials, 2 of the requirements—to use a separate, standardized template for SBIR and STTR program officials to make referrals to their OIGs and to require training for SBIR and STTR applicants—may be unnecessary.
Only 1 agency fully implemented the requirement to create separate standardized templates for referrals to the agency's OIG and Suspension and Debarment Official. It is not clear from the requirement as written if any agency templates for referrals to the OIG and to the Suspension and Debarment Official would meet the requirement, or if the templates need to be SBIR or STTR specific. DOT OIG officials said that they considered creating a separate form for reporting fraud, waste, and abuse for the SBIR program, but they determined it was a more efficient process to use their existing standard form. Similarly, EPA OIG officials said developing a template specific to the SBIR program would involve considerable extra work and complication, particularly when the existing template used for all other programs is working well. In our assessment of agencies' implementation of the requirements, we determined that an agency had implemented the requirement if it had separate templates for referrals to the OIG and the Suspension and Debarment Official, regardless of whether these templates were SBIR or STTR specific. In addition, the requirement to train SBIR and STTR applicants—who may or may not receive an SBIR or STTR award—on fraud, waste, and abuse issues is a component of the requirement for agencies to coordinate with the OIG and may present an unnecessary burden on the agencies and OIGs. Based on our analysis, 7 agencies did not fully address this aspect of the requirement. The Air Force, one component within DOD, requires all applicants to take an online training and provide the certificate showing that they completed the training as part of their SBIR or STTR applications. Air Force officials estimated that about half of the applicants for the first solicitation that required this certification were originally ineligible for awards because they did not fully complete the training or demonstrate that they had done so.
However, the Air Force provided a grace period so that the small businesses could complete the training and receive the required certification, making them eligible for awards. Based on data from fiscal year 2013, the most recent year for which data are available, of the 20,200 SBIR Phase I applications that agencies received, about 15 percent received SBIR awards. Similarly, in 2013, of the approximately 2,700 STTR Phase I applications that agencies received, about 18 percent received STTR awards. Moreover, SBA has not evaluated whether any of the requirements need to be updated. In 2012, SBA included templates in its policy directives that include language that agencies are supposed to use for the certifications required of the small businesses. Officials from 1 agency—NASA—said that they originally used the language included in SBA's sample certification, but as a result of lessons learned from working with the Department of Justice on SBIR fraud prosecutions, NASA has made revisions to strengthen its self-certification forms. However, NASA program officials told us they did not notify SBA of the change in the language. SBA officials told us they met with NASA program officials in March 2017 to discuss the changes in the self-certification language. Leading practices in our Fraud Risk Framework state that agencies are responsible for evaluating outcomes using a risk-based approach and adapting activities to improve fraud risk management. In this context, an evaluation of the outcomes could include assessing whether the requirements are necessary and meeting their intended purposes; are placing an undue burden on the agencies; or otherwise need to be revised, updated, or eliminated, among other things. However, SBA officials told us that they have not evaluated the outcomes of participating agencies' implementation of the fraud, waste, and abuse prevention requirements.
Without evaluating the outcomes of the requirements, SBA does not have reasonable assurance that the requirements are necessary and appropriate and meet the intended purpose of preventing fraud, waste, and abuse in the SBIR and STTR programs, and it cannot change them accordingly. Most OIGs for the 11 participating agencies have implemented the majority of their SBIR and STTR fraud, waste, and abuse prevention requirements, as specified in the reauthorization act, with 2 OIGs engaging in additional activities to prevent and address fraud. However, the OIGs for the military services—Army, Air Force, and Navy—are neither implementing the requirements nor delegating them to their investigative services. Most OIGs for the participating agencies have implemented the majority of their fraud, waste, and abuse prevention requirements for the SBIR and STTR programs that were included in the reauthorization act. For example, OIGs at 5 of the 11 agencies—Education, DOE, HHS, NASA, and NSF—have implemented all of the fraud, waste, and abuse prevention requirements for the programs. The 6 other OIGs have varied in their implementation of the requirements. Between 5 and 11 OIGs have fully implemented each of the requirements. Specifically, all 11 of the OIGs have established fraud detection indicators and shared information on fraud, waste, and abuse. Most of the OIGs have provided training for SBIR and STTR administrators, as well as training for SBIR and STTR awardees. (See fig. 2 and app. III for more information on the OIGs' implementation of fraud, waste, and abuse prevention requirements.) Officials from 4 of the 6 OIGs that did not implement all of their requirements said that they had not done so because they were not previously aware of the specific SBIR and STTR fraud, waste, and abuse prevention requirements for the OIGs but said they would do so in the future.
In addition, officials from the 3 agencies whose OIGs had not provided fraud, waste, and abuse training to SBIR or STTR awardees told us that they plan to do so in the future. The reauthorization act also requires participating agencies' OIGs to submit annual reports to specified congressional committees detailing any SBIR or STTR fraud, waste, or abuse investigations over the past year, including the costs for those investigations, among other things. Each of the 11 agencies' OIGs submitted reports for each of the 4 years. Some OIGs also have implemented or plan to implement additional activities beyond those required in the reauthorization act. Examples of these activities include the following: NASA OIG officials told us that OIG staff make in-person visits to some NASA SBIR or STTR awardees to check on their status, including checking for any fraud, waste, or abuse issues. The OIG staff are then able to share this information with the NASA program staff. DOE OIG officials said they generally plan to conduct audits of the agency's SBIR programs every 3 to 4 years, in an effort to consistently offer recommendations to improve and strengthen DOE's SBIR programs, including any issues they find related to fraud, waste, or abuse. The OIGs have also shared information through an interagency working group focusing on fraud, waste, and abuse in the SBIR and STTR programs, which is organized by the Council of the Inspectors General on Integrity and Efficiency. The NSF OIG has taken leadership of this working group on behalf of the Council, and DOE and NASA OIG officials serve as co-chairs of the group. Representatives from each participating agency's OIG attend this working group, which meets quarterly. The SBA OIG does not officially participate in this group, but an SBA OIG official told us he has attended some of the meetings and, additionally, has provided support on specific SBIR or STTR fraud cases, among other things.
OIG officials told us this working group provides a forum for the members to share information and best practices related to identifying and preventing fraud, waste, and abuse in the SBIR and STTR programs. For example, in its December 2016 meeting, the group discussed ways to implement the requirement in the reauthorization act to train applicants and awardees, according to officials from the NSF OIG. As part of the working group, the NSF OIG has sponsored two conferences—in 2011 and 2016—to share information on fraud, waste, and abuse in the SBIR and STTR programs. The 2016 conference included speakers from SBA, the participating agencies’ OIGs and their SBIR program leaders, as well as Department of Justice prosecutors who had brought SBIR and STTR fraud cases to trial. Approximately 200 officials from the OIGs, SBIR and STTR agency program offices, and the Department of Justice attended this 1-day conference. The NSF OIG has also developed informational materials on SBIR fraud to share with other participating agencies’ OIGs. As with the agency requirements for the SBIR and STTR programs, most of the fraud, waste, and abuse prevention requirements for the OIGs do not directly affect small businesses, except for the requirement to conduct fraud, waste, and abuse training. Representatives we interviewed from 8 of the 11 small businesses said that their businesses had taken training on fraud, waste, and abuse issues provided by an agency that gave them an SBIR or STTR award. Representatives from each of the 8 small businesses said that such fraud, waste, and abuse training for SBIR and STTR awardees was useful. For example, a representative from 1 small business told us that it was important for awardees to know the rules about fraud, waste, and abuse in the programs. This representative said that requiring the same fraud, waste, and abuse training of all awardees is important so that all of the small businesses are held to the same standards for the program. 
None of the small business representatives we interviewed said that they saw the fraud, waste, and abuse training as a burden. For example, two representatives we interviewed from 1 small business said that the training was a relatively short time commitment in exchange for a fairly significant amount of taxpayer funds for their project. Officials from some OIGs identified some challenges in implementing their fraud, waste, and abuse prevention requirements and addressing fraud, waste, and abuse in the 11 agencies’ programs, including: Submitting annual reports to Congress. Three OIGs told us that they found it a challenge to submit the required reports to congressional committees. For example, OIG officials at 1 agency said that because the SBIR or STTR investigations tend to last longer than 1 year, the annual report to Congress may not provide a full picture of all of the SBIR or STTR investigative work that the agency OIG has done in that year. Regarding the timing and reporting period of the reports, the reauthorization act requires the OIGs to submit the annual report to the committees by October 1 each year on activities conducted over the past year. The OIG working group discussed this requirement and interpreted it to mean that they had to report on the investigations from the preceding fiscal year but found that it would be challenging to report the information accurately for the fiscal year on the first day of the new fiscal year, according to an NSF OIG official. For that reason, the working group notified the relevant congressional committees that they would submit the reports by November 1 instead; the NSF OIG official told us that the working group received no objections to this date change. Relative priority of the SBIR or STTR programs compared to other programs. 
Officials from 5 agencies’ OIGs or investigative services told us that SBIR and STTR investigations are generally a lower priority for them because the programs represent a relatively small amount of money compared to other programs that their agencies fund. These officials said that the relatively small budgets involved for the SBIR and STTR programs present a challenge for them to investigate potential fraud, waste, and abuse in the SBIR and STTR programs because they need to prioritize their investigations in areas where the agencies spend more money, such as on larger programs that their agencies fund. For example, HHS OIG officials told us their main focus is on Medicare and Medicaid fraud, because those programs represent the majority of their agency’s funding. Communication between OIGs and agencies. OIG officials from 2 agencies that have multiple offices that implement SBIR or STTR programs told us that they struggle to coordinate with all of the relevant SBIR and STTR offices within their agency and to know what each office has done, or not done, in its activities. For example, OIG officials from 1 agency told us that it is hard for them to identify the right people in each office to include on calls or to learn how each office oversees the SBIR program and complies with the SBIR requirements, among other things. DOD is unique among the SBIR and STTR agencies in that oversight and audit responsibilities are separated between various OIGs and specific investigative services. According to the reauthorization act, each participating agency’s OIG is responsible for implementing the fraud, waste, and abuse prevention requirements, and under the Small Business Act, a participating agency includes DOD’s military departments. However, the DOD OIGs do not investigate SBIR and STTR fraud; instead, such investigations are conducted by separate investigative services within each DOD component, according to a DOD investigative official. 
We found that the military service OIGs—Army, Air Force, and Navy—are neither implementing all of their SBIR and STTR fraud, waste, and abuse prevention requirements nor delegating the completion of these requirements to the investigative services. Specifically, we found that none of the three military services’ OIGs had taken actions to implement their SBIR fraud, waste, and abuse prevention requirements, although the DOD OIG had taken some steps to implement these requirements. In particular, the DOD OIG established fraud detection indicators and has shared information with other federal agencies on SBIR and STTR fraud cases through its involvement in the OIG working group on SBIR and STTR fraud, waste, and abuse. However, the DOD OIG has only partially reviewed the SBIR and STTR regulations and operating procedures for the three services—Army, Air Force, and Navy—that account for the largest SBIR and STTR program budgets in DOD. Moreover, while the DOD OIG has been involved in the training of SBIR and STTR program staff, it has not coordinated or been involved in the training of SBIR and STTR applicants or awardees. On the other hand, the Army, Air Force, and Navy investigative services have implemented several of the SBIR and STTR fraud, waste, and abuse prevention measures that are required of the OIGs. For example, representatives from the Naval Criminal Investigative Service, the Army’s Criminal Investigation Command, and the Air Force’s Office of Special Investigations have coordinated information sharing by attending the OIG working group on SBIR and STTR fraud, waste, and abuse. Officials from all three investigative services in our review also told us that they have conducted, or are conducting, fraud investigations in conjunction with other federal agencies and are sharing information to do so, which meets the spirit of one of the OIG requirements. 
In addition, the Air Force’s Office of Special Investigations recently developed—and helped the Air Force SBIR and STTR program to launch—fraud, waste, and abuse training for all applicants. Naval Criminal Investigative Service staff also told us they provided SBIR fraud training to some Navy personnel who review the SBIR and STTR applications. In addition to the measures outlined above, the DOD OIG has consistently submitted annual reports to congressional committees detailing the number of SBIR and STTR fraud, waste, and abuse investigations over the past year, the costs of those investigations, and other related items. The division of duties between the military services’ OIGs and their respective investigative services makes it difficult to track DOD’s implementation of the requirements. The military services’ OIGs are responsible for implementing the fraud, waste, and abuse prevention requirements; however, the investigative services, which generally investigate fraud, typically conduct several of the activities included in the SBIR and STTR fraud, waste, and abuse prevention requirements. Without the three military services’ OIGs implementing the requirements themselves or delegating the implementation of the requirements to the investigative services, the DOD OIGs may not be able to detect fraud, waste, and abuse in DOD’s SBIR and STTR programs, which have the largest budgets for these programs. Participating agencies and their OIGs have taken steps to implement many of their respective requirements that are designed to help the agencies prevent fraud, waste, and abuse in their SBIR and STTR programs. However, we found that the agencies have varied in their implementation of program requirements, and we identified four areas in which SBA’s oversight or review of these requirements has been limited. First, SBA has not confirmed agencies’ implementation of the minimum fraud, waste, and abuse prevention requirements. 
Without confirming that participating agencies are implementing the minimum fraud, waste, and abuse prevention requirements in the policy directives by, for example, requesting documentation, SBA does not have reasonable assurance that each agency has a fraud, waste, and abuse prevention system in place to help reduce its vulnerability to fraud, waste, and abuse. Second, because SBA has not taken action since 2012 to review the requirements, it does not know whether they are effective or whether any revisions are needed. Given that we identified several areas in which agencies expressed confusion or implemented the requirements differently, SBA may need to more proactively solicit agency information. Third, because the fraud, waste, and abuse prevention requirement regarding “essentially equivalent work” is inconsistent with the definition earlier in the policy directives, SBA has no assurance that participating agencies are appropriately checking for such work that they fund as well as such work funded by other agencies. As a result, agencies may be at a higher risk of funding essentially equivalent work. Fourth, SBA has not evaluated the outcomes of the agencies’ implementation of the fraud, waste, and abuse prevention requirements and therefore does not have reasonable assurance that the requirements are necessary, appropriate, and meet the intended purpose of preventing fraud, waste, and abuse in the SBIR and STTR programs. Additionally, we found that HHS does not collect the life cycle certifications that ensure award recipients are in compliance with specific program requirements. HHS is the only agency that does not collect copies of the certification forms. Without collecting copies of the certification forms, it may be more difficult to prosecute HHS SBIR or STTR awardees if they commit fraud, waste, or abuse. With regard to the OIGs, DOD is the only agency in our review with multiple OIG offices and also multiple investigative services. 
However, the three military services’ OIGs within DOD have not implemented all of the requirements or delegated them to the investigative services. The military services’ OIGs are responsible for implementing the requirements, but the requirements include activities that are typically undertaken by the investigative services within DOD. Without the three military services’ OIGs implementing the requirements themselves or delegating them to the investigative services, the DOD OIGs may not be able to detect fraud, waste, and abuse in DOD’s SBIR and STTR programs, which have the largest budgets for these programs. We are making six recommendations. To help improve agencies’ implementation of the fraud, waste, and abuse prevention requirements in the policy directives, we recommend that the Administrator of SBA take the following four actions:

Confirm that each SBIR and STTR agency is implementing the minimum fraud, waste, and abuse prevention requirements in the policy directives, by, for example, requesting documentation from agencies.

Request input from the participating agencies regarding the clarity of the requirements; review all of the SBIR and STTR minimum fraud, waste, and abuse prevention requirements, including the agency requirement to post information about successful SBIR or STTR fraud prosecutions; determine whether any additional guidance is needed; and revise the policy directives accordingly.

Revise the fraud, waste, and abuse provisions in the policy directives to reflect the definition of essentially equivalent work used elsewhere in the policy directives and require participating agencies to check for essentially equivalent work that they fund as well as such work funded by other agencies.

Evaluate SBIR and STTR agencies’ fraud, waste, and abuse outcomes to ensure the fraud, waste, and abuse prevention requirements are appropriate and meet their intended purpose for the SBIR and STTR programs. 
To help improve the implementation of the fraud, waste, and abuse prevention requirements, we recommend that the Secretary of HHS direct the HHS SBIR and STTR program offices to collect copies of the self-certification forms from its SBIR and STTR awardees. To help ensure that DOD is implementing the fraud, waste, and abuse prevention requirements assigned to the OIGs, we recommend that the Inspectors General of the Army, Navy, and Air Force implement the requirements themselves or delegate the implementation of the requirements to the investigative services. We provided a draft of this report to SBA and the 11 participating agencies for review and comment. Four agencies—SBA, HHS, DOD, and Education—provided written comments, which are reproduced in appendixes IV through VII. Five agencies—SBA, Commerce, DOD, DOT, and NSF—provided technical comments, which we incorporated, as appropriate. Five agencies—DHS, DOE, EPA, NASA, and USDA—had no technical or written comments. DOD and SBA generally agreed with our recommendations, but HHS did not concur with the recommendation to collect the self-certifications from its SBIR and STTR awardees. Specifically, DOD stated that it concurred with our recommendation that the Army, Navy, and Air Force OIGs implement the fraud, waste, and abuse requirements themselves or delegate the implementation of the requirements to the investigative services, but provided no additional details. SBA generally agreed with our four recommendations and noted that it will do more to ensure that agencies are implementing the fraud, waste, and abuse requirements. For the first recommendation, SBA stated that it will request that each participating agency confirm its implementation of the minimum fraud, waste, and abuse prevention requirements. 
For the second recommendation, SBA stated it will contact all agencies in writing to inquire whether additional clarity is needed regarding any of the fraud, waste, and abuse requirements and, if necessary, will provide additional guidance. For the third recommendation, SBA stated that it will take steps to revise the fraud, waste, and abuse prevention requirements in the SBIR and STTR policy directives to reflect the definition of essentially equivalent work as noted in section 3 of the policy directives. For the fourth recommendation, SBA stated that it will survey the participating agencies regarding whether the requirements are necessary and meeting their intended purposes, are placing undue burdens on the agencies, or need to be revised, updated, or eliminated. While SBA generally agreed with our recommendations, it also noted in its comments that it does not have the legislative authority to conduct full-scale audits and assessments of each participating agency's fraud, waste, and abuse outcomes in the SBIR and STTR programs. As we stated in this report, the Small Business Act requires SBA to independently survey and monitor the operation of the SBIR and STTR programs within the participating agencies. Further, the policy directives state that SBA is responsible for ensuring that each participating agency has taken steps to maintain a fraud, waste, and abuse prevention system to minimize the impact of fraud, waste, and abuse on the programs. We continue to believe that the Small Business Act and the policy directives require SBA to take an oversight role for the programs’ fraud, waste, and abuse prevention requirements. 
SBA’s written comments also state that each participating agency has its own OIG that has the authority, expertise, and skill necessary to analyze whether that particular agency is effectively implementing fraud, waste, and abuse measures in its SBIR or STTR program, and suggested that many of the responsibilities that we are recommending to SBA could be more effectively and efficiently handled by the OIGs. We agree that the OIGs have a role in preventing and detecting fraud and abuse in their agencies’ programs and operations generally. However, as we found in this report, several agencies’ OIGs and investigative services noted that the SBIR and STTR programs are often a lower priority because the programs represent a relatively small amount of money compared to other programs that their agencies fund, and that they need to prioritize their investigations in areas where the agencies spend more money. Because the Small Business Act and SBIR and STTR policy directives include requirements for SBA to monitor the operation of the programs and ensure agencies have taken steps to maintain fraud, waste, and abuse prevention systems, we believe the recommendations that we are making to SBA are appropriate. In addition, SBA stated that none of the participating agencies had communicated ambiguity in or a misunderstanding of any fraud, waste, and abuse requirements, and thus SBA was unaware of the need to clarify these requirements. We agree that agencies had not raised issues about the requirements with SBA, and we state this in the report. However, in interviews with agency program officials about the requirements during this review, we identified areas where the agencies were implementing the requirements differently, and program officials confirmed that parts of the requirements were unclear. In some cases, program officials asked us how they were supposed to implement these requirements. 
Such questions and variation in implementation show that there is ambiguity in the requirements or misunderstanding among agencies. In its comments, HHS raised three issues explaining why it did not concur with our recommendation. First, HHS stated that it has implemented the requirements in the policy directives for life cycle certifications. As noted in our report, HHS requires that awardees complete certifications and keep them on file and available for review. However, the fraud, waste, and abuse prevention requirements state that agencies are to require certifications from award recipients during the life cycle of the award. Further, the section of the policy directives that contains the certification templates states that the forms are to be submitted by the applicant to the contracting or granting agency. We do not believe that HHS has fully implemented this requirement because HHS does not require awardees to submit the certifications to HHS. Without collecting the life cycle certifications, HHS has no assurance that awardees have completed them. Second, HHS stated that it cannot accurately determine when certifications are due, and thus when to collect them, because its financial data are typically 45 days in arrears. Award recipients are required to complete life cycle certifications at three financial milestones: when receiving the final payment for a Phase I award, prior to receiving 50 percent of the total amount for a Phase II award, and prior to the final payment for a Phase II award. There is no specific timeframe for agencies to collect the certifications, and we see no issue with collecting them within 45 days of these financial milestones. HHS also stated that requiring grantees to submit certifications would create a substantial administrative burden. However, because the recipient must complete the certification form in any case, HHS does not explain why submitting the form would create a significant additional administrative burden. 
Also, as noted in the report, all of the participating agencies—with the exception of HHS—collect the certifications from awardees, and none mentioned to us that doing so created a significant administrative burden. Third, HHS stated that grant fraud cases, including those for the SBIR or STTR programs, have been successfully prosecuted without grantees proactively submitting life cycle certifications. However, as stated in this report, HHS OIG officials told us that they raised concerns to HHS SBIR staff about the practice of allowing awardees to maintain the certifications at their small businesses instead of submitting them to the agency. In addition, OIG officials from most of the participating agencies told us that they use these certifications to show the intent to commit fraud, if any fraud occurs later, because the small businesses are certifying the accuracy and truth of the information they are submitting to the agency. We continue to believe that taking steps to collect the certifications from SBIR and STTR awardees would bring HHS into full compliance with this requirement and would provide HHS with better assurance that the awardees understand and agree to the terms of the contract. We therefore continue to believe that it is important for HHS to collect the signed life cycle certification forms from small businesses and have retained the recommendation. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, and Transportation; the Administrators of the Small Business Administration, the Environmental Protection Agency, and the National Aeronautics and Space Administration; the Director of the National Science Foundation; the Inspectors General of the Army, Air Force, and Navy; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or NeumannJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. This report examines the extent to which (1) participating agencies and the Small Business Administration (SBA) have implemented fraud, waste, and abuse prevention requirements in the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs and (2) the agencies’ Offices of Inspector General (OIG) have implemented fraud, waste, and abuse prevention requirements in the SBIR and STTR programs. To answer our first objective, we reviewed relevant laws and directives, including the reauthorization act for the SBIR and STTR programs (reauthorization act) and the SBIR and STTR policy directives, as well as our prior report on SBIR and STTR fraud, waste, and abuse issues. We requested and reviewed documentation from the 11 participating agencies on their actions to address fraud, waste, and abuse and to implement the requirements and compared this information to the requirements in the policy directives. We focused on steps taken since 2012 because that was the first year that the fraud, waste, and abuse prevention requirements for agencies contained in the policy directives went into effect. We assessed agencies’ implementation of the requirements but not the effectiveness of their implementation. We determined that an agency had fully implemented a requirement if the agency could demonstrate, through interviews with agency officials and documentation provided by them, as appropriate, that it had fully implemented each part of the requirement. 
We determined that an agency had partially implemented a requirement if the agency could demonstrate, through interviews with agency officials and documentation provided by them, as appropriate, that it had implemented at least one part of the requirement. We determined that an agency had not implemented the requirement if it could not provide evidence of implementing any part of that requirement. We limited our review of the Departments of Defense (DOD) and Health and Human Services (HHS) to the components that spent more than $100 million on SBIR and STTR in fiscal year 2014, the last year for which we had data at the start of our review. For DOD, this includes the three military departments—Army, Air Force, and Navy—that comprise 70 percent of DOD’s SBIR and STTR programs. We also included the Office of the Secretary of Defense, which oversees the SBIR and STTR programs for all DOD components. For HHS, our review includes the National Institutes of Health, which comprises 98 percent of HHS’s SBIR and STTR programs. Further, we requested and reviewed information from SBA to determine the actions SBA has taken to oversee the agencies’ efforts related to fraud, waste, and abuse. In addition, we interviewed program managers from SBA and the participating agencies regarding their actions to address fraud, waste, and abuse in the programs using a standard set of interview questions. To characterize views throughout the report, we defined modifiers to quantify officials’ views as follows: “Most” agencies represents 7 or more agencies’ officials; “several” agencies represents 5 to 6 agencies’ officials; and “some” agencies represents 3 to 4 agencies’ officials. To address our second objective, we reviewed the requirements for fraud, waste, and abuse for the participating agencies’ OIGs in the reauthorization act. 
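The assessment rubric and the quantifying modifiers described above can be restated as a short sketch. This code is not part of the report's methodology; the function names and return strings are illustrative restatements of the definitions in the text:

```python
def implementation_status(parts_implemented, total_parts):
    """Classify a requirement using the report's rubric: 'fully' means
    every part was implemented; 'partially' means at least one part;
    'not implemented' means no evidence for any part."""
    if total_parts > 0 and parts_implemented == total_parts:
        return "fully implemented"
    if parts_implemented >= 1:
        return "partially implemented"
    return "not implemented"

def quantifier(agency_count):
    """Map a count of agencies (out of 11) to the report's modifiers:
    'most' = 7 or more, 'several' = 5 to 6, 'some' = 3 to 4."""
    if agency_count >= 7:
        return "most"
    if agency_count >= 5:
        return "several"
    if agency_count >= 3:
        return "some"
    return None  # the report defines no modifier below 3
```

For example, an agency demonstrating two of a requirement's three parts would be scored "partially implemented" under this rubric.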
We requested and reviewed documentation from the 11 participating agencies’ OIGs on their actions to address the fraud, waste, and abuse prevention requirements and compared this information to the requirements in the reauthorization act. We focused on steps taken since 2012 because that was the first full year that the fraud, waste, and abuse prevention requirements for OIGs went into effect. We assessed OIGs’ implementation of the requirements but not the effectiveness of their implementation. We determined that an OIG had fully implemented a requirement if it could provide documentation that it had fully implemented each part of the requirement. We determined that an OIG had partially implemented a requirement if it could provide documentation that it had implemented at least one part of the requirement. We determined that an OIG had not implemented the requirement if it could not provide evidence of implementing any part of that requirement. In addition, we interviewed SBA and participating agencies’ OIG and investigative staff who have worked on SBIR and STTR reviews and investigations regarding their actions to address fraud, waste, and abuse in the programs, using a standard set of interview questions. To characterize views throughout the report, we defined modifiers to quantify officials’ views as follows: “Most” OIGs represents 7 or more OIGs’ officials; “several” OIGs represents 5 to 6 OIGs’ officials; and “some” OIGs represents 3 to 4 OIGs’ officials. To help inform both objectives, we conducted interviews with a non-generalizable sample of 11 small businesses to obtain information on their experiences with, and knowledge of, fraud, waste, and abuse prevention activities by agencies and the challenges the small businesses faced regarding the SBIR and STTR fraud, waste, and abuse prevention requirements. To select these 11 small businesses, we used publicly available data on SBIR and STTR awardees available on SBA’s SBIR and STTR program website (www.SBIR.gov). 
We downloaded a list of all of the SBIR or STTR awards made by any agency in 2012 through 2016. We selected 2012 as the beginning year because it was the year that agencies started implementing the new fraud, waste, and abuse prevention provisions required by the policy directives for the programs. We identified the small businesses that had received awards from at least 3 agencies and prepared a spreadsheet for each agency that listed all of the small businesses that had received at least three awards, including at least one from that agency. We focused on small businesses that had received SBIR or STTR awards from at least 3 agencies since 2012 to maximize the number of agencies with which the small businesses had experience. We used a random number generator to assign random numbers to each of the small businesses on each agency list and sorted the lists by the randomly generated numbers. We contacted the first small business on each agency’s list to request an interview. If we did not receive a response from the small business after three attempts, we contacted the next small business on the list until we received a response. The information collected from these small businesses is anecdotal and cannot be generalized to all small businesses that receive SBIR or STTR awards, but it provides illustrative examples of small businesses’ experiences with the fraud, waste, and abuse prevention requirements. We conducted this performance audit from April 2016 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
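The selection procedure described above—filter to businesses with awards from at least three agencies, then randomly order each agency's list—can be sketched in code. This is a hypothetical restatement for illustration only, not the tool used for the review; the function name, the `(business, agency)` input shape, and the `seed` parameter are our own simplifications:

```python
import random

def select_interview_candidates(awards, min_agencies=3, seed=None):
    """Approximate the selection procedure: keep small businesses with
    awards from at least `min_agencies` distinct agencies, then build one
    randomly ordered contact list per agency. `awards` is an iterable of
    (business, agency) pairs."""
    rng = random.Random(seed)
    # Map each business to the set of agencies that gave it awards
    agencies_by_business = {}
    for business, agency in awards:
        agencies_by_business.setdefault(business, set()).add(agency)
    eligible = {b for b, agencies in agencies_by_business.items()
                if len(agencies) >= min_agencies}
    # Group eligible businesses by awarding agency
    per_agency = {}
    for business, agency in awards:
        if business in eligible:
            per_agency.setdefault(agency, set()).add(business)
    # Shuffling via rng.sample stands in for assigning random numbers
    # to each business and sorting by them
    return {agency: rng.sample(sorted(businesses), len(businesses))
            for agency, businesses in per_agency.items()}
```

Interviewers would then contact the first business on each agency's list, moving to the next after three unanswered attempts, as described above.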
The 2011 reauthorization of the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs (reauthorization act) required the Small Business Administration (SBA) to add fraud, waste, and abuse prevention requirements to the policy directives for agencies to implement. The policy directives for each program contain the same requirements. Table 1 contains our assessment of each agency’s implementation of the requirements as of December 2016 based on documentation provided by the agencies and interviews with agency officials. Further, we assessed agencies’ implementation of the requirements but not the effectiveness of their implementation. We focused our review on whether the agency had taken any action to implement the requirement. The full text of the requirements, as they appear in the Policy Directives for the programs, is as follows: At a minimum, agencies must: Require certifications from award recipients that they are in compliance with specific program requirements at the time of the award, as well as after the award and during the life cycle of the funding agreement. Include information explaining how an individual can report fraud, waste, and abuse on the agency’s respective program website and in each funding solicitation using the method provided by the agency’s Office of the Inspector General (OIG), such as publishing the agency’s fraud hotline number. Designate at least one individual in the agency to, at a minimum, serve as the liaison for the SBIR or STTR program, the OIG, and the agency’s Suspension and Debarment Official and ensure that inquiries regarding fraud, waste, and abuse are referred to the appropriate office. Include on its program website information concerning successful prosecutions of fraud, waste, and abuse in the programs. 
Establish a written policy requiring all personnel involved with the program to notify the OIG if anyone suspects fraud, waste, and/or abuse and ensure the policy is communicated to all personnel. Create or ensure there is an adequate system to enforce accountability by developing separate standardized templates for referrals to the OIG and the suspension and debarment official, as well as a process for tracking such referrals. Ensure compliance with program eligibility requirements and terms of funding agreements. Work with the agency’s OIG in its efforts to establish fraud detection indicators; coordinate sharing of information on fraud, waste, and abuse between federal agencies; and improve education and training to program officials, applicants, and award recipients. Develop policies and procedures to avoid funding essentially equivalent work already funded by another agency. Consider enhanced reporting requirements during the funding agreement. We limited our review of the Departments of Defense (DOD) and Health and Human Services (HHS) to the components that spent more than $100 million on the SBIR and STTR programs in fiscal year 2014, the last year for which we had data at the start of our review. As such, we limited our review of DOD’s efforts to the three military departments—Army, Air Force, and Navy—that comprise 70 percent of DOD’s SBIR and STTR programs. We also included the Office of the Secretary of Defense, which oversees the SBIR and STTR programs for all DOD components. Additionally, we limited our review of HHS’s program to the National Institutes of Health, which comprises 98 percent of HHS’s SBIR and STTR programs. Our assessment of the SBIR and STTR fraud, waste, and abuse requirements at DOD included three DOD components in our review—Army, Air Force, and Navy.
Our assessment of the SBIR and STTR fraud, waste, and abuse requirements at HHS included one HHS component—the National Institutes of Health. NASA is receiving partial credit for the required certifications from awardees because NASA is not using the exact language in the self-certifications from the small businesses, as required in the policy directive. However, NASA officials told us that they have updated the language for the self-certifications based on lessons learned from prosecuting SBIR fraud cases, which is why NASA is using different language for the certifications than that provided in the policy directives. NSF is receiving partial credit for the required templates for the OIG and the Suspension and Debarment Official because NSF agency officials do not have a template for making referrals to the Suspension and Debarment Official. NSF OIG officials provided information on how they make referrals to the Suspension and Debarment Official, but because this requirement is directed at the agency staff and calls specifically for a template, NSF received partial credit for this requirement. The 2011 reauthorization of the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs (reauthorization act) included five fraud, waste, and abuse prevention requirements for the participating Offices of the Inspectors General (OIG) to implement. The table below contains our assessment of each OIG’s implementation of the requirements as of December 2016 based on documentation provided by the OIGs and interviews with OIG officials. Further, we assessed the OIGs’ implementation of the requirements but not the effectiveness of their implementation. We focused our review on whether the agency OIG had taken any action to implement the requirement.
The full text of the requirements, as they appear in the statute reauthorizing the programs, is as follows: The Inspector General of each Federal agency that participates in the SBIR program or STTR program shall cooperate to prevent fraud, waste, and abuse in the SBIR program and the STTR program by: Establishing fraud detection indicators. Reviewing regulations and operating procedures of the Federal agency. Coordinating information sharing between Federal agencies, to the extent otherwise permitted under federal law. The DOD OIGs have overall responsibility for implementing these requirements under the reauthorization act. However, the responsibilities for investigating fraud, which are typically found within the other participating agencies’ OIGs, are separate for DOD and are located instead in specific investigative services. We found that the three DOD investigative services in our review have implemented several of the SBIR and STTR fraud, waste, and abuse requirements, as noted in our report. However, the table above reflects only the work that the DOD OIG has done to implement the requirements. We found that the DOD component OIGs had not done anything to implement these requirements. In addition to the contact named above, Hilary Benedict (Assistant Director), Matt Ambrose, Lisa Brown, Greg Campbell, Antoinette Capaccio, Justin Fisher, Cindy Gilbert, Kirsten Jacobson, TyAnn Lee, Rebecca Makar, and Sara Sullivan made key contributions to this report.

For about 35 years, federal agencies have made awards to small businesses for technology research and development through the SBIR program and, for the last 25 years, through the STTR program. Following a 2009 congressional hearing about fraud in the programs, the SBIR/STTR Reauthorization Act of 2011 included separate requirements for SBA and OIGs to address and prevent fraud, waste, and abuse.
The act also included a provision for GAO to review what the agencies and their OIGs have done to address fraud, waste, and abuse in the programs. This report examines (1) the extent to which SBA and the participating agencies have implemented measures to prevent fraud, waste, and abuse for the SBIR and STTR programs and (2) the extent to which the agencies' OIGs have implemented the act's requirements. GAO compared documentation from SBA, the 11 participating agencies, and the agencies' OIGs to their respective requirements and interviewed SBA, agency, and OIG officials. The 11 agencies participating in the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs have varied in their implementation of the fraud, waste, and abuse prevention requirements developed by the Small Business Administration (SBA) after the programs were reauthorized in 2011. SBA, which oversees the programs, amended the SBIR and STTR policy directives in 2012, as required by the reauthorization act, to include 10 minimum requirements to help agencies prevent potential fraud, waste, and abuse in the programs. GAO found that the extent to which the agencies have fully implemented each of the requirements in the directives varies. For example, all 11 agencies have fully implemented 2 requirements, more than half of the agencies have fully implemented 6 other requirements, and 1 and 3 agencies, respectively, have fully implemented the remaining 2 requirements. Officials from 9 agencies told GAO they have implemented other activities beyond the minimum requirements included in the directives, such as conducting site visits to small businesses to confirm that the necessary facilities exist for technical research and development work. Although SBA issued the policy directives, it has taken few actions to oversee agencies' efforts to implement the requirements. 
SBA officials said they checked on the implementation of one of the requirements, but did not know whether the participating agencies were implementing the other requirements because they had not confirmed this information. Without confirming that each participating agency is implementing the fraud, waste, and abuse prevention requirements in the policy directives, SBA does not have reasonable assurance that each agency has a system in place to reduce its vulnerability to fraud, waste, and abuse. Similarly, Offices of Inspectors General (OIG) varied in their implementation of the fraud, waste, and abuse prevention requirements specifically assigned to them in the reauthorization act, with between 5 and 11 OIGs implementing each requirement. For example, OIGs at each of the 11 agencies have shared information on fraud, waste, and abuse. Of the 11 participating agencies, the Department of Defense (DOD) is the only one whose oversight and audit responsibilities are separated between its various OIGs and specific investigative services, so that DOD has both an OIG and an investigative service, as does each of the military services. By law, the OIGs of each military service within DOD—Army, Navy, and Air Force—are each required to implement these requirements. However, GAO found that none of the three military service OIGs had taken actions to implement the requirements, although the DOD OIG had taken some steps to implement them. The division of duties between the military services’ OIGs and their respective investigative services makes it difficult to track the implementation of these requirements at DOD. Without the three military services’ OIGs implementing the requirements themselves or delegating the implementation of the requirements to the investigative services, the DOD OIGs may not be able to detect fraud, waste, and abuse in DOD’s SBIR and STTR programs, which have the largest budgets for these programs.
GAO is making six recommendations, including that SBA confirm agency implementation of the fraud, waste, and abuse requirements, and that the Army, Navy, and Air Force OIGs implement the OIG requirements or delegate them to the investigative services. These agencies generally agreed with the recommendations addressed to them.
The federal government is likely to invest more than $89 billion on IT in fiscal year 2017. However, as we have previously reported, investments in federal IT too often result in failed projects that incur cost overruns and schedule slippages, while contributing little to the desired mission-related outcomes. For example: The Department of Veterans Affairs’ Scheduling Replacement Project was terminated in September 2009 after spending an estimated $127 million over 9 years. The tri-agency National Polar-orbiting Operational Environmental Satellite System was stopped in February 2010 by the White House’s Office of Science and Technology Policy after the program spent 16 years and almost $5 billion. The Department of Homeland Security’s Secure Border Initiative Network program was ended in January 2011, after the department obligated more than $1 billion to the program, because it did not meet cost-effectiveness and viability standards. The Office of Personnel Management’s Retirement Systems Modernization program was canceled in February 2011, after spending approximately $231 million on the agency’s third attempt to automate the processing of federal employee retirement claims. The Department of Veterans Affairs’ Financial and Logistics Integrated Technology Enterprise program was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011 due to challenges in managing the program. The Department of Defense’s Expeditionary Combat Support System was canceled in December 2012 after spending more than a billion dollars and failing to deploy within 5 years of initially obligating funds. These and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT investments. 
Federal IT projects have also failed due to a lack of oversight and governance. Executive-level governance and oversight across the government has often been ineffective, specifically from chief information officers (CIO). For example, we have reported that not all CIOs had the authority to review and approve the entire agency IT portfolio and that CIOs’ authority was limited. Recognizing the severity of issues related to government-wide management of IT, FITARA was enacted in December 2014. The law was intended to improve agencies’ acquisitions of IT and enable Congress to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. FITARA includes specific requirements related to seven areas. Federal data center consolidation initiative (FDCCI). Agencies are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing the data centers (to include planned cost savings), and quarterly updates on progress made. The law also requires OMB to develop a goal for how much is to be saved through this initiative, and provide annual reports on cost savings achieved. Enhanced transparency and improved risk management. OMB and agencies are to make detailed information on federal IT investments publicly available, and agency CIOs are to categorize their IT investments by level of risk. Additionally, in the case of major IT investments rated as high risk for 4 consecutive quarters, the law requires that the agency CIO and the investment’s program manager conduct a review aimed at identifying and addressing the causes of the risk. Agency CIO authority enhancements. Agency CIOs are required to (1) approve the IT budget requests of their respective agencies, (2) certify that OMB’s incremental development guidance is being adequately implemented for IT investments, (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO. 
Portfolio review. Agencies are to annually review IT investment portfolios in order to, among other things, increase efficiency and effectiveness and identify potential waste and duplication. In establishing the process associated with such portfolio reviews, the law requires OMB to develop standardized performance metrics, to include cost savings, and to submit quarterly reports to Congress on cost savings. Expansion of training and use of IT acquisition cadres. Agencies are to update their acquisition human capital plans to address supporting the timely and effective acquisition of IT. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres or developing agreements with other agencies that have such cadres. Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user. Maximizing the benefit of the federal strategic sourcing initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the federal strategic sourcing initiative. OMB is also required to issue related regulations. In June 2015, OMB released guidance describing how agencies are to implement FITARA. 
OMB’s guidance is intended to, among other things: assist agencies in aligning their IT resources with statutory requirements; establish government-wide IT management controls that will meet the law’s requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements; clarify the CIO’s role and strengthen the relationship between agency CIOs and bureau CIOs; and strengthen CIO accountability for IT cost, schedule, performance, and security. The guidance identified several actions that agencies were to take to establish a basic set of roles and responsibilities (referred to as the common baseline) for CIOs and other senior agency officials, which are needed to implement the authorities described in the law. For example, agencies were required to conduct a self-assessment and submit a plan describing the changes they intended to make to ensure that common baseline responsibilities are implemented. Agencies were to submit their plans to OMB’s Office of E-Government and Information Technology by August 15, 2015, and make portions of the plans publicly available on agency websites no later than 30 days after OMB approval. As of November 2016, all agencies had made their plans publicly available. In addition, in August 2016, OMB released guidance intended to, among other things, define a framework for achieving the data center consolidation and optimization requirements of FITARA. The guidance includes requirements for agencies to: maintain complete inventories of all data center facilities owned, operated, or maintained by or on behalf of the agency; develop cost savings targets due to consolidation and optimization for fiscal years 2016 through 2018 and report any actual realized cost savings; and measure progress toward meeting optimization metrics on a quarterly basis.
The guidance also directs agencies to develop a data center consolidation and optimization strategic plan that defines the agency’s data center strategy for fiscal years 2016, 2017, and 2018. This strategy is to include, among other things, a statement from the agency CIO stating whether the agency has complied with all data center reporting requirements in FITARA. Further, the guidance indicates that OMB is to maintain a public dashboard that will display consolidation-related costs savings and optimization performance information for the agencies. In February 2015, we introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. This area highlights several critical IT initiatives in need of additional congressional oversight, including (1) reviews of troubled projects; (2) efforts to increase the use of incremental development; (3) efforts to provide transparency relative to the cost, schedule, and risk levels for major IT investments; (4) reviews of agencies’ operational investments; (5) data center consolidation; and (6) efforts to streamline agencies’ portfolios of IT investments. We noted that implementation of these initiatives has been inconsistent and more work remains to demonstrate progress in achieving IT acquisitions and operations outcomes. Further, in our February 2015 high-risk report, we identified actions that OMB and the agencies need to take to make progress in this area. These include implementing FITARA, as well as implementing at least 80 percent of our recommendations related to the management of IT acquisitions and operations within 4 years. As noted in that report, we made multiple recommendations to improve agencies’ management of IT acquisitions and operations, many of which are discussed later in this statement. 
Specifically, between fiscal years 2010 and 2015, we made 803 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations, including many to improve the implementation of the recent initiatives and other government-wide, cross-cutting efforts. As of October 2016, OMB and the agencies had fully implemented about 46 percent of these recommendations. This is a 23 percent increase compared to the percentage we reported as being fully implemented in 2015. Figure 1 summarizes the progress that OMB and the agencies have made in addressing our recommendations, as compared to the 80 percent target. In addition, in fiscal year 2016, we made 202 new recommendations, thus further reinforcing the need for OMB and agencies to address the shortcomings in IT acquisitions and operations. Agencies have taken steps to improve the management of IT acquisitions and operations by implementing key FITARA initiatives. However, agencies would be better positioned to fully implement the law, and thus realize additional management improvements, if they addressed the numerous recommendations we have made aimed at improving data center consolidation, increasing transparency via OMB’s IT Dashboard, and incremental development. One of the key initiatives to implement FITARA is data center consolidation. OMB established FDCCI in February 2010 to improve the efficiency, performance, and environmental footprint of federal data center activities. In a series of reports over the past 5 years, we determined that while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in several areas, including agencies’ data center consolidation plans and OMB’s tracking and reporting on cost savings. In total, we have made 111 recommendations to OMB and agencies to improve the execution and oversight of the initiative. Most agencies agreed with our recommendations or had no comments. 
In March 2016, we reported that the 24 agencies participating in FDCCI collectively had made progress on their data center closure efforts. Specifically, as of November 2015, these agencies had identified a total of 10,584 data centers, of which they reported closing 3,125 through fiscal year 2015. Notably, the Departments of Agriculture, Defense, the Interior, and the Treasury accounted for 84 percent of these total closures. Further, the agencies have reported that they are planning to close additional data centers by the end of fiscal year 2019. In addition, we noted that 19 of the 24 agencies had reported achieving an estimated $2.8 billion in cost savings and avoidances from their data center consolidation and optimization efforts from fiscal years 2011 through 2015. The Departments of Commerce, Defense, Homeland Security, and the Treasury accounted for about $2.4 billion (or about 86 percent) of the total. Further, 21 agencies collectively reported planning an additional $5.4 billion in cost savings and avoidances, for a total of approximately $8.2 billion, through fiscal year 2019. Figure 2 summarizes agencies’ reported achieved and planned cost savings and avoidances from fiscal years 2011 through 2019. To better ensure that federal data center consolidation and optimization efforts improve governmental efficiency and achieve cost savings, we recommended that 10 of the 24 agencies take actions to complete their planned data center cost savings and avoidance targets for fiscal years 2016 through 2018. We also recommended that 22 of the 24 agencies take actions to improve optimization progress, including addressing any identified challenges. Fourteen agencies agreed with our recommendations, 4 did not state whether they agreed or disagreed, and 6 stated that they had no comments. 
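The savings figures reported above are additive, and the total can be verified with a quick arithmetic check. This is only a sketch using the figures quoted in the text, not GAO analysis code.

```python
# Reported data center cost savings and avoidances, in billions of dollars
# (figures taken from the report text above).
achieved_fy2011_2015 = 2.8    # reported by 19 of the 24 agencies
planned_through_fy2019 = 5.4  # additional savings planned by 21 agencies

# The two figures are additive: 2.8 + 5.4 = 8.2, matching the
# approximately $8.2 billion total cited above.
total_through_fy2019 = achieved_fy2011_2015 + planned_through_fy2019

# Share of achieved savings attributed to Commerce, Defense, Homeland
# Security, and the Treasury (about $2.4 billion of the $2.8 billion).
top_four_share = 2.4 / achieved_fy2011_2015  # roughly 0.86, i.e., 86 percent
```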
To facilitate transparency across the government in acquiring and managing IT investments, OMB established a public website—the IT Dashboard—to provide detailed information on major investments at 26 agencies, including ratings of their performance against cost and schedule targets. Among other things, agencies are to submit ratings from their CIOs, which, according to OMB’s instructions, should reflect the level of risk facing an investment relative to that investment’s ability to accomplish its goals. In this regard, FITARA includes a requirement for CIOs to categorize their major IT investment risks in accordance with OMB guidance. Over the past 6 years, we have issued a series of reports about the IT Dashboard that noted both significant steps OMB has taken to enhance the oversight, transparency, and accountability of federal IT investments by creating its IT Dashboard, as well as issues with the accuracy and reliability of data. In total, we have made 47 recommendations to OMB and federal agencies to help improve the accuracy and reliability of the information on the IT Dashboard and to increase its availability. Most agencies agreed with our recommendations or had no comments. Most recently, in June 2016, we determined that agencies had not fully considered risks when rating their major investments on the IT Dashboard. Specifically, our assessments of risk for 95 investments at 15 selected agencies matched the CIO ratings posted on the Dashboard 22 times, showed more risk 60 times, and showed less risk 13 times. Figure 3 summarizes how our assessments compared to the selected investments’ CIO ratings. Aside from the inherently judgmental nature of risk ratings, we identified three factors which contributed to differences between our assessments and the CIO ratings: Forty of the 95 CIO ratings were not updated during the month we reviewed, which led to more differences between our assessments and the CIOs’ ratings.
This underscores the importance of frequent rating updates, which help to ensure that the information on the Dashboard is timely and accurately reflects recent changes to investment status. Three agencies’ rating processes spanned longer than 1 month. Longer processes mean that CIO ratings are based on older data, and may not reflect the current level of investment risk. Seven agencies’ rating processes did not focus on active risks. According to OMB’s guidance, CIO ratings should reflect the CIO’s assessment of the risk and the investment’s ability to accomplish its goals. CIO ratings that do not incorporate active risks increase the chance that ratings overstate the likelihood of investment success. As a result, we concluded that the associated risk rating processes used by the agencies were generally understating the level of an investment’s risk, raising the likelihood that critical federal investments in IT are not receiving the appropriate levels of oversight. To better ensure that the Dashboard ratings more accurately reflect risk, we recommended that the 15 agencies take actions to improve the quality and frequency of their CIO ratings. Twelve agencies generally agreed with or did not comment on the recommendations and three agencies disagreed. OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. In 2010, it called for agencies’ major investments to deliver functionality every 12 months and, since 2012, every 6 months. Subsequently, FITARA codified a requirement that agency CIOs certify that IT investments are adequately implementing OMB’s incremental development guidance. In May 2014, we reported that 66 of 89 selected investments at five major agencies did not plan to deliver capabilities in 6-month cycles, and less than half of these investments planned to deliver functionality in 12-month cycles.
We also reported that only one of the five agencies had complete incremental development policies. Accordingly, we recommended that OMB develop and issue clearer guidance on incremental development and that the selected agencies update and implement their associated policies. Four of the six agencies agreed with our recommendations or had no comments; the remaining two agencies partially agreed or disagreed with the recommendations. More recently, in August 2016, we reported that agencies had not fully implemented incremental development practices for their software development projects. Specifically, we noted that, as of August 31, 2015, 22 federal agencies had reported on the IT Dashboard that 300 of 469 active software development projects (approximately 64 percent) were planning to deliver usable functionality every 6 months for fiscal year 2016, as required by OMB guidance. Regarding the remaining 169 projects (or 36 percent) that were reported as not planning to deliver functionality every 6 months, agencies provided a variety of explanations for not achieving that goal. These included project complexity, the lack of an established project release schedule, or that the project was not a software development project. Table 1 lists the total number and percent of federal software development projects for which agencies reported plans to deliver functionality every 6 months for fiscal year 2016. In reviewing seven selected agencies’ software development projects, we determined that 45 percent of the projects delivered functionality every 6 months for fiscal year 2015 and 55 percent planned to do so in fiscal year 2016. However, significant differences existed between the delivery rates that the agencies reported to us and what they reported on the IT Dashboard. For example, in four cases (Commerce, Education, HHS, and Treasury), the percentage of delivery reported to us was at least 10 percentage points lower than what was reported on the IT Dashboard. 
These differences were due to (1) our identification of fewer software development projects than agencies reported on the IT Dashboard and (2) the fact that information reported to us was generally more current than the information reported on the IT Dashboard. Figure 4 compares the percentages of software development projects planning to deliver functionality every 6 months, as reported on the IT Dashboard and as reported to us. We concluded that, without up-to-date information on the IT Dashboard about whether a project is a software development project and the extent to which projects are delivering functionality, these seven agencies were at risk that OMB and key stakeholders may make decisions regarding the agencies’ investments without the most current and accurate information. Finally, while OMB has issued guidance requiring agency CIOs to certify that each major IT investment’s plan for the current year adequately implements incremental development, only three agencies (the Departments of Commerce, Homeland Security, and Transportation) had defined processes and policies intended to ensure that the department CIO certifies that major IT investments are adequately implementing incremental development. Officials from three other agencies (the Departments of Education, Health and Human Services, and the Treasury) reported that they were in the process of updating their existing incremental development policy to address certification, while the Department of Defense’s policies that address incremental development did not include information on CIO certification. We concluded that until all of the agencies we reviewed define processes and policies for the certification of the adequate use of incremental development, they will not be able to fully ensure adequate implementation of, or benefit from, incremental development practices, as required by FITARA.
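The project counts cited above (469 active software development projects, of which 300 planned 6-month delivery) reconcile with the reported percentages; a minimal check using only the figures from the report:

```python
# Active software development projects reported on the IT Dashboard
# as of August 31, 2015 (counts taken from the report text above).
total_projects = 469
delivering_every_6_months = 300

not_delivering = total_projects - delivering_every_6_months        # 169 projects
pct_delivering = 100 * delivering_every_6_months / total_projects  # about 64 percent
pct_not_delivering = 100 * not_delivering / total_projects         # about 36 percent
```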
To improve the reporting of incremental data on the IT Dashboard and policies for CIO certification of adequate incremental development, we made 12 recommendations to seven agencies and OMB. Five agencies agreed with our recommendations. In addition, the Department of Defense partially agreed with one recommendation and disagreed with another, OMB did not agree or disagree, and the Department of the Treasury did not comment on the recommendations. In summary, with the enactment of FITARA, the federal government has an opportunity to improve the transparency and management of IT acquisitions and operations, and to strengthen the authority of CIOs to provide needed direction and oversight. To their credit, agencies have taken steps to improve the management of IT acquisitions and operations by implementing key FITARA initiatives, including data center consolidation, efforts to increase transparency via OMB’s IT Dashboard, and incremental development; and they have continued to address recommendations we have made over the past several years. However, additional improvements are needed, and further efforts by OMB and federal agencies to implement our previous recommendations would better position them to fully implement FITARA. To help ensure that these efforts succeed, continued congressional oversight of OMB’s and agencies’ implementation of FITARA is essential. In addition, we will continue to monitor agencies’ implementation of our previous recommendations. Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at pownerd@gao.gov. Individuals who made key contributions to this testimony are Kevin Walsh (Assistant Director), Chris Businsky, Rebecca Eyler, and Bradley Roach (Analyst in Charge).
Information Technology Reform: Agencies Need to Increase Their Use of Incremental Development Practices. GAO-16-469. Washington, D.C.: August 16, 2016. IT Dashboard: Agencies Need to Fully Consider Risks When Rating Their Major Investments. GAO-16-494. Washington, D.C.: June 2, 2016. Data Center Consolidation: Agencies Making Progress, but Planned Savings Goals Need to Be Established. GAO-16-323. Washington, D.C.: March 3, 2016. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Data Center Consolidation: Reporting Can Be Improved to Reflect Substantial Planned Savings. GAO-14-713. Washington, D.C.: September 25, 2014. Information Technology: Agencies Need to Establish and Implement Incremental Development Policies. GAO-14-361. Washington, D.C.: May 1, 2014. IT Dashboard: Agencies Are Managing Investment Risk, but Related Ratings Need to Be More Accurate and Available. GAO-14-64. Washington, D.C.: December 12, 2013. Data Center Consolidation: Strengthened Oversight Needed to Achieve Cost Savings Goal. GAO-13-378. Washington, D.C.: April 23, 2013. Information Technology Dashboard: Opportunities Exist to Improve Transparency and Oversight of Investment Risk at Select Agencies. GAO-13-98. Washington, D.C.: October 16, 2012. Data Center Consolidation: Agencies Making Progress on Efforts, but Inventories and Plans Need to Be Completed. GAO-12-742. Washington, D.C.: July 19, 2012. IT Dashboard: Accuracy Has Improved, and Additional Efforts Are Under Way to Better Inform Decision Making. GAO-12-210. Washington, D.C.: November 7, 2011. Federal Chief Information Officers: Opportunities Exist to Improve Role in Information Technology Management. GAO-11-634. Washington, D.C.: September 15, 2011. Data Center Consolidation: Agencies Need to Complete Inventories and Plans to Achieve Expected Savings. GAO-11-565. Washington, D.C.: July 19, 2011. 
Information Technology: OMB Has Made Improvements to Its Dashboard, but Further Work Is Needed by Agencies and OMB to Ensure Data Accuracy. GAO-11-262. Washington, D.C.: March 15, 2011. Information Technology: OMB's Dashboard Has Increased Transparency and Oversight, but Improvements Needed. GAO-10-701. Washington, D.C.: July 16, 2010. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The federal government is likely to invest more than $89 billion in IT in fiscal year 2017. Historically, these investments have frequently failed, incurred cost overruns and schedule slippages, or contributed little to mission-related outcomes. Accordingly, in December 2014, IT reform legislation was enacted, aimed at improving agencies' acquisitions of IT. Further, in February 2015, GAO added improving the management of IT acquisitions and operations to its high-risk list. Between fiscal years 2010 and 2015, GAO made about 800 recommendations related to this high-risk area to OMB and agencies. This statement summarizes agencies' progress in improving the management of IT acquisitions and operations. To do so, GAO reviewed and summarized its prior and recently published work on (1) data center consolidation, (2) risk levels of major investments as reported on OMB's IT Dashboard, and (3) implementation of incremental development practices. Consolidating data centers. In an effort to reduce the growing number of data centers, OMB launched a consolidation initiative in 2010. GAO reported in March 2016 that agencies had closed 3,125 of the 10,584 total data centers and achieved $2.8 billion in cost savings and avoidances through fiscal year 2015. 
Agencies are planning a total of about $8.2 billion in savings and avoidances through fiscal year 2019. GAO recommended that the agencies take actions to meet their cost savings targets and improve optimization progress related to their data center consolidation and optimization efforts. Most agencies agreed with the recommendations or had no comment. Enhancing transparency. OMB's IT Dashboard provides detailed information on major investments at federal agencies, including ratings from Chief Information Officers (CIO) that should reflect the level of risk facing an investment. GAO reported in June 2016 that agencies had not fully considered risks when rating their major investments on the IT Dashboard. In particular, of the 95 investments reviewed, GAO's assessments of risks matched the CIO ratings 22 times, showed more risk 60 times, and showed less risk 13 times. Several issues contributed to these differences, such as CIO ratings not being updated frequently. GAO recommended that agencies improve the quality and frequency of their ratings. Most agencies generally agreed with or did not comment on the recommendations. Implementing incremental development. A key reform initiated by OMB has emphasized the need for federal agencies to deliver investments in smaller parts, or increments, in order to reduce risk and deliver capabilities more quickly. Since 2012, OMB has required investments to deliver functionality every 6 months. In August 2016, GAO reported that 22 agencies had reported that 64 percent of 469 active software development projects planned to deliver usable functionality every 6 months for fiscal year 2016. Further, for 7 selected agencies, GAO identified significant differences in the percentages of software projects reported to GAO as delivering functionality every 6 months, compared to what was reported on the IT Dashboard. 
This was due to, among other things, inconsistencies in agencies' reporting on non-software development projects and the timing of reporting data. GAO made 12 recommendations to 7 agencies and OMB to improve the reporting of incremental data on the IT Dashboard and the policies for CIO certification of adequate incremental development. Most agencies agreed with or did not comment on the recommendations, and OMB neither agreed nor disagreed. GAO has previously made numerous recommendations to OMB and federal agencies to improve the oversight and execution of the data center consolidation initiative, the accuracy and reliability of the IT Dashboard, and incremental development policies. Most agencies agreed with GAO's recommendations or had no comments. GAO will continue to monitor agencies' implementation of these recommendations. |
An effective military medical surveillance system needs to collect reliable information on (1) the health care provided to service members before, during, and after deployment; (2) where and when service members were deployed; (3) environmental and occupational health threats or exposures during deployment (in theater) and appropriate protective and countermeasures; and (4) baseline health status and subsequent health changes. This information is needed to monitor the overall health condition of deployed troops, inform them of potential health risks, and maintain and improve the health of service members and veterans. In times of conflict, a military medical surveillance system is particularly critical to ensure the deployment of a fit and healthy force and to prevent disease and injuries from degrading force capabilities. DOD needs reliable medical surveillance data to determine who is fit for deployment; to prepare service members for deployment, including providing vaccinations to protect against possible exposure to environmental and biological threats; and to treat physical and psychological conditions that resulted from deployment. DOD also uses this information to develop educational measures for service members and medical personnel to ensure that service members receive appropriate care. Reliable medical surveillance information is also critical for VA to carry out its missions. In addition to VA's better known missions—to provide health care and benefits to veterans and medical research and education—VA has a fourth mission: to provide medical backup to DOD in times of war and civilian health care backup in the event of disasters producing mass casualties. 
As such, VA needs reliable medical surveillance data from DOD to treat casualties of military conflicts, provide health care to veterans who have left active duty, assist in conducting research should troops be exposed to environmental or occupational hazards, and identify service-connected disabilities and adjudicate veterans' disability claims. Investigations into the unexplained illnesses of service members and veterans who had been deployed to the Gulf uncovered the need for DOD to implement an effective medical surveillance system to obtain comprehensive medical data on deployed service members, including Reservists and National Guardsmen. Epidemiological and health outcome studies to determine the causes of these illnesses have been hampered by incomplete baseline health data on Gulf War veterans, their potential exposure to environmental health hazards, and specific health data on care provided before, during, and after deployment. The 1996 investigations by the Presidential Advisory Committee on Gulf War Veterans' Illnesses and IOM into the causes of illnesses experienced by Gulf War veterans confirmed the need for more effective medical surveillance capabilities. The National Science and Technology Council, as tasked by the Presidential Advisory Committee, also assessed the medical surveillance system for deployed service members. In 1998, the council reported that inaccurate recordkeeping made it extremely difficult to get a clear picture of what risk factors might be responsible for Gulf War illnesses. It also reported that without reliable deployment and health assessment information, it was difficult to ensure that veterans' service-related benefits claims were adjudicated appropriately. The council concluded that the Gulf War exposed many deficiencies in the ability to collect, maintain, and transfer accurate data describing the movement of troops, potential exposures to health risks, and medical incidents in theater. 
The council reported that the government’s recordkeeping capabilities were not designed to track troop and asset movements to the degree needed to determine who might have been exposed to any given environmental or wartime health hazard. The council also reported major deficiencies in health risk communications, including not adequately informing service members of the risks associated with countermeasures such as vaccines. Without this information, service members may not recognize potential side effects of these countermeasures and promptly take precautionary actions, including seeking medical care. In response to these reports, DOD strengthened its medical surveillance system under Operation Joint Endeavor when service members were deployed to Bosnia-Herzegovina, Croatia, and Hungary. In addition to implementing departmentwide medical surveillance policies, DOD developed specific medical surveillance programs to improve the monitoring and tracking of environmental and biomedical threats in theater. While these efforts represented important steps, a number of deficiencies remained. On the positive side, the Assistant Secretary of Defense (Health Affairs) issued a health surveillance policy for troops deploying to Bosnia. This guidance stressed the need to (1) identify health threats in theater, (2) routinely and uniformly collect and analyze information relevant to troop health, and (3) disseminate this information in a timely manner. DOD required medical units to develop weekly reports on the incidence rates of major categories of diseases and injuries during all deployments. Data from these reports showed theaterwide illness and injury trends, so that preventive measures could be identified and guidance forwarded to the theater medical command regarding abnormal trends or actions that should be taken. DOD also established the U.S. Army Center for Health Promotion and Preventive Medicine—a major enhancement to DOD’s ability to perform environmental monitoring and tracking. 
For example, the center operates and maintains a repository of service members’ serum samples for medical surveillance and a system to integrate, analyze, and report data from multiple sources relevant to the health and readiness of military personnel. This capability was augmented with the establishment of the 520th Theater Army Medical Laboratory—a deployable public health laboratory for providing environmental sampling and analysis in theater. The sampling results can be used to identify specific preventive measures and safeguards to be taken to protect troops from harmful exposures and to develop procedures to treat anyone exposed to health hazards. During Operation Joint Endeavor, this laboratory was used in Tuzla, Bosnia, where most of the U.S. forces were located, to conduct air, water, soil, and other environmental monitoring. Despite the department’s progress, we and others have reported on DOD’s implementation difficulties during Operation Joint Endeavor and the shortcomings in DOD’s ability to maintain reliable health information on service members. Knowledge of who is deployed and their whereabouts is critical for identifying individuals who may have been exposed to health hazards while deployed. However, in May 1997, we reported that the inaccurate information on who was deployed and where and when they were deployed—a problem during the Gulf War—continued to be a concern during Operation Joint Endeavor. For example, we found that the Defense Manpower Data Center (DMDC) database—where military services are required to report deployment information—did not include records for at least 200 Navy service members who were deployed. Conversely, the DMDC database included Air Force personnel who were never actually deployed. In addition, we reported that DOD had not developed a system for tracking the movement of service members within theater. 
IOM also reported that the locations of service members during deployments were still not systematically documented or archived for future use. We also reported in May 1997 that for the more than 600 Army personnel whose medical records we reviewed, DOD’s centralized database for postdeployment medical assessments did not capture 12 percent of those assessments conducted in theater and 52 percent of those conducted after returning home. These data are needed by epidemiologists and other researchers to assess at an aggregate level the changes that have occurred between service members’ pre- and postdeployment health assessments. Further, many service members’ medical records did not include complete information on in-theater postdeployment medical assessments that had been conducted. The Army’s European Surgeon General attributed missing in-theater health information to DOD’s policy of having service members hand carry paper assessment forms from the theater to their home units, where their permanent medical records were maintained. The assessments were frequently lost en route. We have also reported that not all medical encounters in theater were being recorded in individual records. Our 1997 report found that this problem was particularly common for immunizations given in theater. Detailed data on service members’ vaccine history are vital for scheduling the regimen of vaccinations and boosters and for tracking individuals who received vaccinations from a specific lot in the event health concerns about the vaccine lot emerge. We found that almost one-fourth of the service members’ medical records that we reviewed did not document that they had received a vaccine for tick-borne encephalitis. In addition, in its 2000 report, IOM cited limited progress in medical recordkeeping for deployed active duty and reserve forces and emphasized the need for records of immunizations to be included in individual medical records. 
Responding to our and others’ recommendations to improve information on service members’ deployments, in-theater medical encounters, and immunizations, DOD has continued to revise and expand its policies relating to medical surveillance, and the system continues to evolve. In addition, in 2000, DOD released its Force Health Protection plan, which presents its vision for protecting deployed forces. This vision emphasizes force fitness and health preparedness and improving the monitoring and surveillance of health threats in military operations. However, IOM criticized DOD’s progress in implementing its medical surveillance program and the failure to implement several recommendations that IOM had made. In addition, IOM raised concerns about DOD’s ability to achieve the vision outlined in the Force Health Protection plan. We have also reported that some of DOD’s programs designed to improve medical surveillance have not been fully implemented. IOM’s 2000 report presented the results of its assessment of DOD’s progress in implementing recommendations for improving medical surveillance made by IOM and several others. IOM stated that, although DOD generally concurred with the findings of these groups, DOD had made few concrete changes at the field level. For example, medical encounters in theater were still not always recorded in individuals’ medical records, and the locations of service members during deployments were still not systematically documented or archived for future use. In addition, environmental and medical hazards were not yet well integrated in the information provided to commanders. The IOM report notes that a major reason for this lack of progress is that no single authority within DOD has been assigned responsibility for implementing the recommendations and plans. IOM said that because of the complexity of the tasks and the overlapping areas of responsibility involved, the single authority must rest with the Secretary of Defense. 
In its report, IOM describes six strategies that in its view demand further emphasis and require greater efforts by DOD: (1) use a systematic process to prospectively evaluate non-battle-related risks associated with the activities and settings of deployments; (2) collect and manage environmental data and personnel location, biological samples, and activity data to facilitate analysis of deployment exposures and to support clinical care and public health activities; (3) develop the risk assessment, risk management, and risk communications skills of military leaders at all levels; (4) accelerate implementation of a health surveillance system that completely spans an individual’s time in service; (5) implement strategies to address medically unexplained symptoms in populations that have deployed; and (6) implement a joint computerized patient record and other automated recordkeeping that meets the information needs of those involved with individual care and military public health. DOD guidance established requirements for recording and tracking vaccinations and automating medical records for archiving and recalling medical encounters. While our work indicates that DOD has made some progress in improving its immunization information, the department faces numerous challenges in implementing an automated medical record. In October 1999, we reported that DOD’s Vaccine Adverse Event Reporting System, which relies on medical personnel or service members to provide needed vaccine data, may not have included information on adverse reactions because DOD did not adequately inform personnel on how to provide this information. Additionally, in April 2000, we testified that vaccination data were not consistently recorded in paper records and in a central database, as DOD requires. 
For example, when comparing records from the database with paper records at four military installations, we found that information on the number of vaccinations given to service members, the dates of the vaccinations, and the vaccine lot numbers was inconsistent at all four installations. At one installation, the database and records did not agree 78 to 92 percent of the time. DOD has begun to make progress in implementing our recommendations, including ensuring timely and accurate data in its immunization tracking system. The Gulf War revealed the need to have information technology play a bigger role in medical surveillance to ensure that the information is readily accessible to DOD and VA. In August 1997, DOD established requirements that called for the use of innovative technology, such as an automated medical record device for documenting inpatient and outpatient encounters in all settings that can archive the information for local recall and format it for an injury, illness, and exposure surveillance database. Also, in 1997, the President, responding to deficiencies in DOD’s and VA’s data capabilities for handling service members’ health information, called for the two agencies to start developing a comprehensive, lifelong medical record for each service member. As we reported in April 2001, DOD’s and VA’s numerous databases and electronic systems for capturing mission-critical data, including health information, are not linked, and information cannot be readily shared. DOD has several initiatives under way to link many of its information systems—some with VA. For example, in an effort to create a comprehensive, lifelong medical record for service members and veterans and to allow health care professionals to share clinical information, DOD and VA, along with the Indian Health Service (IHS), initiated the Government Computer-Based Patient Record (GCPR) project in 1998. 
GCPR is seen as yielding a number of potential benefits, including improved research and quality of care, and clinical and administrative efficiencies. However, our April 2001 report describes several factors—including planning weaknesses, competing priorities, and inadequate accountability—that made it unlikely that DOD and VA would accomplish GCPR or realize its benefits in the near future. To strengthen the management and oversight of GCPR, we made several recommendations, including designating a lead entity with a clear line of authority for the project and creating comprehensive and coordinated plans for sharing meaningful, accurate, and secure patient health data. For the near term, DOD and VA have decided to reconsider their approach to GCPR and focus on allowing VA to view DOD health data. However, under the interim effort, physicians at military medical facilities will not be able to view health information from other facilities or from VA—now a potentially critical information source given VA’s fourth mission to provide medical backup to the military health system in times of national emergency and war. Recent meetings with officials from the Defense Health Program and the Army Surgeon General’s Office indicate that the department is working on issues we have reported on in the past, including the need to improve the reliability of deployment information and the need to integrate disparate health information systems. Specifically, these officials informed us that DOD is in the process of developing a more accurate roster of deployed service members and enhancing its information technology capabilities. For example, DOD’s Theater Medical Information Program (TMIP) is intended to capture medical information on deployed personnel and link it with medical information captured in the department’s new medical information system, now being field tested. 
Developmental testing for TMIP is about to begin and field testing is expected to begin next spring, with deployment expected in 2003. A component system of TMIP—Transportation Command Regulating and Command and Control Evacuation System—is also under development and aims to allow casualty tracking and provide in-transit visibility of casualties during wartime and peacetime. Also under development is the Global Expeditionary Medical System, which DOD characterizes as a stepping stone to an integrated biohazard surveillance and detection system. Clearly, the need for comprehensive health information on service members and veterans is very great, and much more needs to be done. However, it is also a very difficult task because of uncertainties about what conditions may exist in a deployed setting, such as potential military conflicts, environmental hazards, and frequency of troop movements. While progress is being made, DOD will need to continue to make a concerted effort to resolve the remaining deficiencies in its surveillance system. Until such time as some of the deficiencies are overcome, VA’s ability to perform its missions will be affected. For further information, please contact Stephen P. Backhus at (202) 512-7101. Individuals making key contributions to this testimony included Ann Calvaresi Barr, Karen Sloan, and Keith Steck. | The Departments of Defense (DOD) and Veterans Affairs (VA) are establishing a medical surveillance system for the health care needs of military personnel and veterans. The system will collect and analyze information on deployments, environmental health threats, disease monitoring, medical assessments, and medical encounters. GAO has identified weaknesses in DOD's medical surveillance capability and performance during the Gulf War and Operation Joint Endeavor. 
Investigations into the unexplained illnesses of Gulf War veterans uncovered many deficiencies in DOD's ability to collect, maintain, and transfer accurate data on the movement of troops, potential exposures to health risks, and medical incidents during deployment. DOD has several initiatives under way to improve the reliability of deployment information and to enhance its information technology capabilities, though some initiatives are several years away from full implementation. VA's ability to serve veterans and provide backup to DOD in times of war will be enhanced as DOD increases its medical surveillance capability. |
EPA has taken some actions but has not fully addressed the findings and recommendations of five independent evaluations over the past 20 years regarding long-standing planning, coordination, and leadership issues that hamper the quality, effectiveness, and efficiency of its science activities, including its laboratory operations. First, EPA has yet to fully address planning and coordination issues identified by a 1992 independent, expert panel evaluation that recommended that EPA develop and implement an overarching issue-based planning process that integrates and coordinates scientific efforts throughout the agency, including the important work of its 37 laboratories. That evaluation found that EPA’s science was of uneven quality and that the agency lacked a coherent science agenda and operational plan to guide scientific efforts throughout the agency. Because EPA did not implement the evaluation’s recommendation, EPA’s program offices, regional offices, and ORD continue to independently plan and coordinate the activities of their respective laboratories based on their own offices’ priorities and needs. Second, in response to a study by the MITRE Corporation’s Center for Environment, Resources, and Space (Assessment of the Scientific and Technical Laboratories and Facilities of the U.S. Environmental Protection Agency, McLean, Va., May 1994), an agencywide steering committee formed by EPA to consider restructuring and consolidation options issued a report to the Administrator in July 1994. The steering committee report stated that combining ORD laboratories at a single location could improve teamwork and raise productivity but concluded that, for the near term, ORD should be functionally reorganized but not physically consolidated. 
Regarding program office laboratory consolidations, the Office of Radiation and Indoor Air did not physically consolidate its laboratories but did administratively and physically consolidate its Las Vegas laboratory with ORD’s Las Vegas radiation laboratory, and the Office of Prevention, Pesticides, and Toxic Substances colocated three of four laboratories with the region 3 laboratory. As for the regional laboratories, the steering committee’s report endorsed the current decentralized regional model but did not provide a justification for its position. Third, subsequent evaluations recommended that EPA establish a top official responsible for overseeing the agency’s scientific and technical activities: National Research Council, Interim Report of the Committee on Research and Peer Review in EPA (Washington, D.C.: National Academies Press, 1995); Environmental Protection Agency, Office of Inspector General, Regional Laboratories (Washington, D.C.: Aug. 20, 1997); and National Research Council, Strengthening Science at the U.S. Environmental Protection Agency: Research-Management and Peer Review Practices (Washington, D.C.: National Academies Press, 2000). To date, EPA has not requested authority to create a new position of deputy administrator for science and technology and continues to operate its laboratories under the direction of 15 different senior officials using 15 different organizational and management structures. As a result, EPA has a limited ability to know if scientific activities are being unintentionally duplicated among the laboratories or if opportunities exist to collaborate and share scientific expertise, equipment, and facilities across EPA’s organizational boundaries. On the basis of our analysis of EPA’s facility master planning process, we found that EPA manages its laboratory facilities on a site-by-site basis and does not evaluate each site in the context of all the agency’s real property holdings—as recommended by the National Research Council report in 2004. 
EPA’s facility master plans are intended to be the basis for justifying its building and facilities spending, which was $29.9 million in fiscal year 2010, and allocating those funds to specific repair and improvement projects. Master plans should contain, among other things, information on mission capabilities, use of space, and condition of individual laboratory sites. In addition, we found that most facility master plans were out of date. EPA’s real property asset management plan states that facility master plans are supposed to be updated every 5 years to reflect changes in facility condition and mission, but we found that 11 of 20 master plans were out of date and 2 of 20 had not been created yet. Because EPA makes capital improvement decisions on a site-by-site basis using master plans that are often outdated, it cannot be assured it is allocating its funds most appropriately. According to officials responsible for allocating capital improvement resources, they try to spread these funds across the agency’s offices and regions equitably but capital improvement funds have not kept pace with requests. The pressure and need to effectively share and allocate limited resources among EPA’s many laboratories were also noted in a 1994 National Academy of Public Administration report on EPA’s laboratory infrastructure, which found that EPA has “too many labs in too many locations often without sufficient resources to sustain a coherent stable program.” In addition, because decisions regarding laboratory facilities are made independently of one another, opportunities to improve operating efficiencies can be lost. Specifically, we found cases where laboratories that were previously colocated moved into separate space without considering the potential benefits of remaining colocated. 
In one case, we found that the relocation increased some operating costs because the laboratories then had two facility managers and, owing to different requirements for the leased facility, two security contracts and associated personnel. In another case, when two laboratories that were previously colocated moved into separate new leased laboratories several miles apart, agency officials said that they did not know to what extent this move may have resulted in increased operating costs. EPA also does not have sufficiently complete and reliable data to make informed decisions for managing its facilities. Since 2003, when GAO first designated federal real property management as an area of high risk, agencies have come under increasing pressure to manage their real property assets more effectively. In February 2004, the President issued an executive order directing agencies to, among other things, improve the operational and financial management of their real property inventory. The order established a Federal Real Property Council within the Office of Management and Budget (OMB), which has developed guiding principles for real property asset management. In response to a June 2010 presidential memorandum directing agencies to accelerate efforts to identify and eliminate excess properties, in July 2010 EPA reported to OMB that it does not anticipate the disposal of any of its owned laboratories and major assets in the near future because these assets are fully used and considered critical for EPA’s mission. Decisions regarding facility disposal are made using the Federal Real Property Council’s guidance, but we found that EPA does not have the information needed to effectively implement this guidance. Specifically, EPA does not have accurate, reliable information regarding (1) the need for facilities, (2) property usage, (3) facility condition, and (4) facility operating efficiency—thereby undermining the credibility of any decisions based on this approach. 
First, EPA does not maintain accurate data to determine if there is an agency need for laboratory facilities because many facility master plans are often out of date. According to EPA’s asset management plan, the master plans are tools that communicate the link between mission priorities and facilities. However, without up-to-date master plans, EPA does not have accurate data to determine if laboratory facilities are needed for its mission. Second, the agency does not have accurate data on space needs and usage because many facility master plans containing space utilization analyses are out of date. EPA also does not use public and commercial space usage benchmarks—as recommended by the Federal Real Property Council—to calculate usage rates for its laboratories. Instead, EPA measures laboratory usage on the basis of interviews with local laboratory officials. According to EPA officials, they do not use benchmarks because the work of the laboratories varies. In 2008, however, an EPA contractor created a laboratory benchmark based on those used by comparable facilities at the Centers for Disease Control and Prevention, the National Institutes of Health, the Department of Energy, and several research universities to evaluate space at two ORD laboratories in North Carolina. Consequently, we believe that objective benchmarks can be developed for EPA’s unique laboratory requirements. In addition, the contractor’s analysis concluded that EPA could save $1.68 million in annual leasing and $800,000 in annual energy costs through consolidation of the two ORD laboratories. Agency officials told us they hope to consolidate the laboratories in fiscal year 2012 if funds are available. Third, the agency does not have accurate data for assessing facilities’ condition because condition assessments contained in facility master plans are often outdated. The data may also be unreliable because data entered by local facility managers are not verified, according to agency officials. 
Such verification could involve edit checks or controls to help ensure the data are entered accurately. Fourth, EPA does not have reliable operating cost data for its laboratory enterprise, because the agency’s financial management system does not track operating costs in sufficient detail to break out information for individual laboratories or for the laboratory enterprise as a whole. Reliable operating cost data are important in determining whether a laboratory facility is operating efficiently, a determination that should inform both capital investment and property disposal decisions. EPA does not use a comprehensive planning process for managing its laboratories’ workforce. For example, we found that not all of the regional and program offices with laboratories prepared workforce plans as part of an agencywide planning effort in 2007, and for those that did, most did not specifically address their laboratories’ workforce. In fact, some regional management and human resource officials we spoke with were unaware of the requirement to submit workforce plans to the Office of Human Resources. Some of these managers told us the program and regional workforce plans were a paperwork exercise, irrelevant to the way the workforce is actually managed. Managers in program and regional offices said that workforce planning for their respective laboratories is fundamentally driven by the annual budgets of program and regional offices and ceilings for full-time equivalents (FTE). In addition, none of the program and regional workforce plans we reviewed described any effort to work across organizational boundaries to integrate or coordinate their workforce with the workforces of other EPA laboratories. For example, although two regional workforce plans discussed potential vulnerability if highly skilled laboratory personnel retired, neither plan explored options for sharing resources across regional boundaries to address potential skill gaps. 
According to EPA’s Regional Laboratory System 2009 Annual Report, many of the regional laboratories provide the same or similar core analytical capabilities— including a full range of routine and specialized chemical and biological testing of air, water, soil, sediment, tissue, and hazardous waste. Nonetheless, in these workforce plans, each region independently determines and attempts to address its individual workforce needs. As a result, by not exploring options for sharing resources among the ORD, program, and regional boundaries to address potential skill gaps, EPA may be missing opportunities to fill critical occupation needs through resource sharing. Moreover, EPA does not have basic demographic information on the number of federal and contract employees currently working in its 37 laboratories. Specifically, EPA does not routinely compile the information needed to know how many scientific and technical employees it has working in its laboratories, where they are located, what functions they perform, or what specialized skills they may have. In addition, the agency does not have a workload analysis for the laboratories to help determine the optimal numbers and distribution of staff throughout the enterprise. We believe that such information is essential for EPA to prepare a comprehensive laboratory workforce plan to achieve the agency’s mission with limited resources. Because EPA’s laboratory workforce is managed separately by 15 independent senior officials, information about that workforce is tracked separately and is not readily available or routinely compiled or evaluated. Instead, EPA has relied on ad hoc calls for information to compile such data. 
In response to our prior reports on EPA’s workforce strategy and the work of the EPA Inspector General, EPA hired a contractor in 2009, in part to conduct a study to provide information about the agency’s overall workload, including staffing levels and workload shifts for six major functions, including scientific research. In its budget justification for fiscal year 2012, however, the agency reported to Congress that a survey of the existing workload information provided by the contractor will not immediately provide information sufficient to determine whether changes are needed in workforce levels. As of October 2011, EPA had not released the results of this study, and we therefore cannot comment on whether its content has implications for the laboratories. The agency asked its National Advisory Council for Environmental Policy and Technology to help address scientific and technical competencies as it develops a new agencywide workforce plan. However, the new plan is not complete, and therefore it is too early to tell whether the council’s recommendations will have implications for the laboratories. Finally, in our July 2011 report on EPA’s laboratory enterprise we recommended, among other things, that EPA develop a coordinated planning process for its scientific activities and appoint a top-level official with authority over all the laboratories, improve physical and real property planning decisions, and develop a workforce planning process for all laboratories that reflects current and future needs of laboratory facilities. In written comments on the report, EPA generally agreed with our findings and recommendations. Chairman Harris, Ranking Member Miller, this concludes my prepared statement. I would be happy to respond to any questions that you or other members of the subcommittee may have at this time. For further information on this statement, please contact David Trimble at (202) 512-3841 or trimbled@gao.gov. 
Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Other staff that made key contributions to this testimony include Diane LoFaro, Assistant Director; Jamie Meuwissen; Angela Miles; and Dan Semick. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony discusses the research and development activities of the Environmental Protection Agency (EPA) and the findings of our recent report on the agency's laboratory enterprise. EPA was established in 1970 to consolidate a variety of federal research, monitoring, standard-setting, and enforcement activities into one agency for ensuring the joint protection of environmental quality and human health. Scientific research, knowledge, and technical information are fundamental to EPA's mission and inform its standard-setting, regulatory, compliance, and enforcement functions. The agency's scientific performance is particularly important as complex environmental issues emerge and evolve, and controversy continues to surround many of the agency's areas of responsibility. Unlike other primarily science-focused federal agencies, such as the National Institutes of Health or the National Science Foundation, EPA's scientific research, technical support, and analytical services underpin the policies and regulations the agency implements. Therefore, the agency operates its own laboratory enterprise. This enterprise is made up of 37 laboratories that are housed in about 170 buildings and facilities located in 30 cities across the nation. 
Specifically, EPA's Office of Research and Development (ORD) operates 18 laboratories with primary responsibility for research and development. Four of EPA's five national program offices operate nine laboratories with primary responsibility for supporting regulatory implementation, compliance, enforcement, and emergency response. Each of EPA's 10 regional offices operates a laboratory with responsibilities for a variety of applied sciences; analytical services; technical support to federal, state, and local laboratories; monitoring; compliance and enforcement; and emergency response. Over the past 20 years, independent evaluations by the National Research Council and others have addressed planning, coordination, or leadership issues associated with EPA's science activities. The scope of these evaluations varied, but collectively they recognized the need for EPA to improve long-term planning, priority setting, and coordination of laboratory activities; establish leadership for agencywide scientific oversight and decision making; and better manage the laboratories' workforce and infrastructure. When it was established in 1970, EPA inherited 42 laboratories from programs in various federal departments. According to EPA's historian, EPA closed or consolidated some laboratories it inherited and created additional laboratories to support its mission. Nevertheless, EPA's historian reported that the location of most of EPA's present laboratories is largely the same as the location of its original laboratories in part because of political objections to closing facilities and conflicting organizational philosophies, such as operating centralized laboratories for efficiency versus operating decentralized laboratories for flexibility and responsiveness. Other federal agencies face similar challenges with excess and underused property. Because of these challenges, GAO has designated federal real property as an area of high risk. 
This statement summarizes the findings of our report issued in July of this year that examines the extent to which EPA (1) has addressed the findings of independent evaluations performed by the National Research Council and others regarding long-term planning, coordination, and leadership issues; (2) uses an agencywide, coordinated approach for managing its laboratory physical infrastructure; and (3) uses a comprehensive planning process to manage its laboratory workforce. EPA has taken some actions but has not fully addressed the findings and recommendations of five independent evaluations over the past 20 years regarding long-standing planning, coordination, and leadership issues that hamper the quality, effectiveness, and efficiency of its science activities, including its laboratory operations. First, EPA has yet to fully address planning and coordination issues identified by a 1992 independent, expert panel evaluation that recommended that EPA develop and implement an overarching issue-based planning process that integrates and coordinates scientific efforts throughout the agency, including the important work of its 37 laboratories. Second, EPA has also not fully addressed recommendations from a 1994 independent evaluation by the MITRE Corporation to consolidate and realign its laboratory facilities and workforce--even though this evaluation found that the geographic separation of laboratories hampered their efficiency and technical operations and that consolidation and realignment could improve planning and coordination issues that have hampered its science and technical community for decades. Third, EPA has not fully addressed recommendations from the independent evaluations regarding leadership of its research and laboratory operations. 
On the basis of our analysis of EPA's facility master planning process, we found that EPA manages its laboratory facilities on a site-by-site basis and does not evaluate each site in the context of all the agency's real property holdings--as recommended by the National Research Council report in 2004. EPA's facility master plans are intended to be the basis for justifying its building and facilities spending, which was $29.9 million in fiscal year 2010, and allocating those funds to specific repair and improvement projects. Master plans should contain, among other things, information on mission capabilities, use of space, and condition of individual laboratory sites. In addition, we found that most facility master plans were out of date. EPA's real property asset management plan states that facility master plans are supposed to be updated every 5 years to reflect changes in facility condition and mission, but we found that 11 of 20 master plans were out of date and 2 of 20 had not been created yet. EPA does not use a comprehensive planning process for managing its laboratories' workforce. For example, we found that not all of the regional and program offices with laboratories prepared workforce plans as part of an agencywide planning effort in 2007, and for those that did, most did not specifically address their laboratories' workforce. In fact, some regional management and human resource officials we spoke with were unaware of the requirement to submit workforce plans to the Office of Human Resources. Some of these managers told us the program and regional workforce plans were a paperwork exercise, irrelevant to the way the workforce is actually managed. Managers in program and regional offices said that workforce planning for their respective laboratories is fundamentally driven by the annual budgets of program and regional offices and ceilings for full-time equivalents (FTE). |
Following the September 11, 2001, attacks, the United States, several allies, and Afghanistan’s Northern Alliance forcibly removed the Taliban regime from power in Afghanistan for providing a safe haven to al Qaeda terrorists. After years of conflict and Taliban rule, the new Afghan government inherited a state with limited capacity to govern; an economy dominated by the narcotics trade; constraints on economic development due, in part, to resource limitations and mountainous terrain (see fig. 1); a poorly developed infrastructure with few roads and little household access to power and running water; and weak national security forces. In April 2002, the United States and other donor nations met in Geneva, Switzerland, to help Afghanistan address threats to its security. At the Geneva conference, the donors established a security reform strategy for Afghanistan: the United States would lead the training of the Afghan army, and Germany would lead the police reconstitution effort. However, due, in part, to Afghanistan’s pressing security needs and concerns that the German training program was moving too slowly, the United States expanded its role in the police training effort. As we reported in 2005, according to cognizant German officials, Germany viewed its role in the police sector as one of advising and consulting with donors and the Afghan government rather than as the major implementer or funding source. In 2002, the international community endorsed the decision of the Afghan government to create an ethnically balanced and voluntary Afghan National Army (ANA) force of no more than 70,000. In 2006, this vision was reaffirmed by the Afghan government and the international community through its integration into the Afghanistan National Development Strategy and Afghanistan Compact, documents that articulated economic, social, and security priorities for Afghanistan. These documents also set the end of 2010 as the timeline for the establishment of the ANA. 
In February 2008, citing increased security challenges, the Afghan government and its international partners endorsed a 10,000-person increase in the force structure of the ANA from 70,000 to 80,000. The strategic role of the Afghan Ministry of Defense and the ANA is to defend and deter aggression against Afghanistan, support and defend the Afghan Constitution, defeat the insurgency and terrorism, and support Afghanistan’s reconstruction and reintegration into the regional and international community, among other things. To accomplish this, the army’s current force structure includes (1) Ministry of Defense and general staff personnel, (2) sustaining institutions, (3) intermediate command staff, (4) combat forces, and (5) Afghan air corps personnel. Combat forces form the basic operational arm of the ANA and are divided into five corps, located in different regions of Afghanistan. A corps contains 1 or more brigades. A typical brigade consists of approximately 2,800 personnel: three light infantry battalions (with approximately 650 personnel each), one combat support battalion (with approximately 500 personnel), and one combat services support battalion (with approximately 350 personnel). (See app. II for additional details on the force structure and functions of the ANA.) U.S. efforts to establish the army are led by Defense, in partnership with the government of Afghanistan. The Defense-staffed Combined Security Transition Command-Afghanistan (CSTC-A) oversees the Afghan army’s training, facilities development, assessment, and equipment provision. Under CSTC-A is Task Force Phoenix, a joint coalition task force responsible for training, mentoring, and advising the Afghan army at the Kabul Military Training Center and elsewhere in the country (see fig. 2). The reconstitution of the Afghan National Police (ANP) began in February 2002 when donor nations agreed to establish a multiethnic, sustainable, 62,000-member professional police service committed to the rule of law. 
In May 2007, the Afghan government and its international partners approved an interim increase in the police force from 62,000 to 82,000 personnel, to be reviewed every 6 months. The Afghan government and international community set the end of 2010 as the timeline for the establishment of the ANP force. In addition to enforcing the rule of law, the role of the ANP is to protect the rights of citizens, maintain civil order and public safety, support actions to defeat insurgency, control national borders, and reduce the level of domestic and international organized crime, among other activities. The force structure for the police includes Ministry of Interior headquarters and administrative staff, uniformed police personnel, and several specialized police units. This report primarily focuses on U.S. efforts to build the uniformed police, the largest component of the Afghan police force. (See app. II for further details on the force structure and functions of the ANP.) U.S. efforts to organize, train, and equip the ANP are directed by Defense, through CSTC-A, with support from State, which provides policy guidance to the effort and oversight of civilian contractors implementing police training courses. The primary U.S. contractor involved in the police training effort is DynCorp International, which provides police training courses in criminal investigation, physical fitness, and weapons and survival skills, as well as civilian mentors to assist in developing the Afghan Ministry of Interior and the police forces it administers. Germany leads efforts to train commissioned and noncommissioned Afghan police officers at the Kabul Police Academy (see fig. 3). The United States provided $16.5 billion from fiscal years 2002 through 2008 to support the training and equipping of the Afghan army and police (see table 1). 
Slightly over 45 percent (approximately $7.6 billion) of the funding was provided in fiscal year 2007, in an effort to accelerate the training and equipping and enhance the capabilities of the Afghan National Security Forces (ANSF). These figures do not include certain operational costs, such as the personnel costs for U.S. servicemembers assigned to the training and equipping mission. (See app. I for further details on our methodology.) More than 40 nations and international organizations have also provided funds, equipment, or personnel to support U.S. efforts to train and equip the ANSF. As of March 2008, non-U.S. donors had provided about $820 million in support of efforts to develop the ANSF: approximately $426 million to supplement efforts to train and equip the Afghan army and about $394 million in support of the Afghan police. Over 15 nations contribute mentors to the army, providing approximately one-third of the personnel who assist in training ANA units in the field. The EU has provided 80 mentors to assist the police at the ministerial, regional, and provincial levels out of approximately 215 pledged. Additionally, the United Nations Development Programme administers the Law and Order Trust Fund for Afghanistan, which provides reimbursement to the Afghan government for police salaries. Approximately 80 percent of international donations for the ANP have supported programs through the Law and Order Trust Fund for Afghanistan (about $311 million of about $394 million). We previously identified the need for detailed plans to complete and sustain the ANSF. In June 2005, GAO recommended that the Secretaries of Defense and State develop detailed plans for completing and sustaining the ANSF that contain clearly defined objectives and performance measures, milestones for achieving stated objectives, future funding requirements, and a strategy for sustaining the results achieved. 
Our report recommended that the Secretaries provide this information to Congress when the executive branch requests funding for the Afghan army or police forces. Although Defense and State generally concurred with this recommendation, both suggested that existing reporting requirements addressed the need to report to Congress their plans for completing and sustaining the Afghan army and police forces. Our analysis of Defense and State reporting to Congress determined that the departments did not have the detailed plans we recommended to guide the development of the ANSF and to facilitate congressional oversight. As a result, in our 2007 report, we reiterated the need for Defense and State to develop such plans. Following our reports, in 2008, Congress mandated that the President, acting through the Secretary of Defense, submit reports to Congress on progress toward security and stability in Afghanistan, including a comprehensive and long-term strategy and budget for strengthening the ANSF. The first such report was due by the end of April 2008 but has yet to be provided to Congress; subsequent reports must be submitted every 180 days thereafter, through the end of fiscal year 2010. In addition, Congress mandated that Defense submit reports on a long-term, detailed plan for sustaining the ANSF, on the same schedule: the first report was due by the end of April 2008 but has yet to be provided to Congress, and subsequent reports must be submitted every 180 days thereafter, through the end of fiscal year 2010. Defense and State have not developed a coordinated, detailed plan for completing and sustaining the Afghan army and police forces, despite our recommendation in 2005 and a mandate from Congress in 2008 that such a plan be developed. Defense provided GAO a 5-page document in January 2007 that, according to Defense officials, is intended to meet GAO’s recommendation. 
However, it does not include several of the key elements identified in our recommendation and does not provide a sufficient level of detail for effective interagency planning and congressional oversight. Although CSTC-A has developed a field-level plan in Afghanistan that integrates the Afghan government’s interests, this represents military planning and is not a coordinated Defense and State plan with near- and long-term resource requirements. As noted earlier, without a coordinated, detailed plan containing the elements identified in our 2005 recommendation, congressional oversight concerning the extent and cost of the U.S. commitment to train and equip the ANSF is difficult, and decision makers may not have sufficient information to assess progress and allocate defense resources among competing priorities. As of March 2008, neither Defense nor State had developed a coordinated, detailed plan for completing and sustaining the ANSF that includes clearly defined objectives and performance measures, milestones for achieving stated objectives, and a strategy for sustaining the results achieved, including transitioning program responsibility to Afghanistan. In January 2007, Defense provided us a 5-page document that, according to Defense officials, is intended to meet GAO’s 2005 recommendation for detailed plans to complete and sustain the ANSF. Although Defense and State are partners in training the ANP, the Defense document does not describe the role of State or other key stakeholders. State also did not contribute to the development of this document and has not developed a plan of its own. In addition, U.S. military officials responsible for the effort to build the ANSF were not familiar with the document at the time of our visit to Kabul in August 2007—over 6 months after we received the document from Defense officials in Washington. The 5-page document that Defense developed in response to our 2005 recommendation is limited in scope and detail. 
For example, although the document provides some broad objectives and performance measures for training and equipping the ANSF, it identifies few milestones. Further, these milestones are not consistent with long-term milestones identified in field documents generated by U.S. military forces operating in Afghanistan and do not include intermediate milestones for judging progress in the medium term. The document provides no mechanism for measuring progress over time against established baselines, other than monthly status reports from the field. These status reports, while useful in identifying month-to-month progress in unit capabilities, use baselines that generally go back no more than 18 months. As such, it is difficult to identify progress since the start of the program and the effect that various factors, such as additional resources, have had on training and equipment availability, as discussed in prior GAO work. The 5-page document Defense developed in response to our 2005 recommendation also does not provide a detailed strategy for sustaining the ANSF. Defense currently estimates that no additional money, beyond the approximately $16.5 billion provided between fiscal years 2002 and 2008, is needed to complete the initial training and equipping of a 70,000-person army force and an 82,000-person police force. If the force structure grows, Defense officials acknowledged that budgetary requirements likely would also increase. In February 2008, the Afghan government and its international partners endorsed an increase in the force structure of the ANA by 10,000. A Defense official stated that increasing the force structure by 10,000 additional army personnel will cost approximately $1 billion more. In addition, Defense estimates that approximately $2 billion a year will be needed for the next 5 years to sustain the ANSF—$1 billion for the Afghan army and $1 billion for the police. This is based on a 152,000-person end strength—70,000 ANA and 82,000 ANP. 
Defense officials estimate that increasing the ANA force structure by 10,000 will cost about $100 million annually to sustain. By comparison, in 2005, Defense and State estimated the cost to sustain an ANA force of 70,000 and an ANP force of 62,000 would total $600 million per year. This sustainment estimate, however, did not include the cost of sustaining capabilities such as airlift, which is currently being developed for the Afghan army. Defense expects the sustainment transition to begin in fiscal year 2009. According to U.S. military officials in Afghanistan and the recently approved CSTC-A Campaign Plan, U.S. involvement in training and equipping the ANSF may extend beyond a decade. However, neither Defense nor State has identified funding requirements or forecasts beyond 2013. As noted earlier, the United States has been a major contributor to this mission, providing approximately $16.5 billion between fiscal years 2002 and 2008 to train and equip the forces. At present, Afghanistan is unable to support the recurring costs of its security forces, such as salaries and equipment replacement, without substantial foreign assistance. An international task force studying the effects of increasing the size of the ANP noted that by 2012, if the Afghan Ministry of Finance assumed responsibility for police salaries, the salary costs for an 82,000-person police force could total nearly 9 percent of the Afghan government’s budget—a cost that could mean large cuts in other programs, such as education, health, and other social services, according to the task force. U.S. officials stated that until Afghan revenues increase substantially, the international community would likely need to assist in paying sustainability costs, including some salaries. However, despite these statements by U.S. military officials in Afghanistan, Defense officials in Washington have not indicated how long and in what ways the U.S. government expects to continue assisting the ANSF. In a briefing on the U.S. 
approach to sustaining the ANSF, Defense and State officials stated that sustainment costs will be transitioned to the government of Afghanistan commensurate with the nation’s economic capacity, and that the United States and the international community will need to assist Afghanistan in developing revenues and capacity to sustain the army and police. For example, Defense and State officials stated that greater revenues could be obtained by improving border management, noting that customs duties generate more than half of Afghanistan’s revenues. When we inquired about such a plan, however, these officials did not identify any other ways to increase revenues for the security sector, nor did they indicate whether this information is being incorporated into a broader plan for developing and sustaining the ANSF. Since GAO reported in 2005, field-level planning for the training and equipping of the ANSF has improved. In January 2008, CSTC-A completed a field-level plan for ANSF development and an operations order with further detail on the development and execution of the fiscal year 2008 ANSF force generation program. The Campaign Plan for the Development of Afghan National Military and Police Forces (Campaign Plan) is a military plan. It provides field-level goals, objectives, and capability milestones for the development of the Ministries of Defense and Interior, including Afghan army and police forces. With a new emphasis on quality training, the plan extends the time frames for ANSF development beyond those reported in our 2005 report. However, while this military plan provides needed field guidance, it is not a coordinated Defense and State plan with near- and long-term resource requirements. When we last reported in 2005, Defense had not fully implemented or been able to reach agreement on criteria for assessing an Afghan army unit’s readiness to operate without training assistance. 
Since that time, Defense has developed criteria—called capability milestones (CM)—to assess army and police progress in manning, training, and equipping the forces. Units are assessed against four capability milestones that range from CM1 to CM4. A unit, agency, staff function, or installation rated at CM1 is fully capable of conducting its primary operational mission but may require assistance from the international community in certain situations. For instance, a combat unit capable of operating at CM1 is fully capable of planning, executing, and sustaining counterinsurgency operations at the battalion level; however, coalition support may be required for certain capabilities, such as close air support, medical evacuation, or indirect fire support. By contrast, a unit, agency, or staff function rated at CM4 has been established but is not yet capable of conducting its primary operational mission and can undertake only portions of its mission with significant assistance from, and reliance on, international support. The table below provides descriptions of the capability milestones, as identified in the CSTC-A Campaign Plan. The Campaign Plan identifies three key phases in the development of Afghan army and police forces: fielding/generating forces, developing forces, and transitioning to strategic partnership. Table 3 describes these phases and their corresponding milestones. It is not clear from the Campaign Plan whether the milestones are based on an ANA force structure of 70,000 or 80,000. If based on 70,000, the milestones would likely need to be revisited. Milestone dates for the accomplishment of certain objectives have been extended beyond those reported earlier. For example, our 2005 report states that Defense officials estimated that basic training for 43,000 ground combat troops would be accomplished by the fall of 2007. However, the Campaign Plan extends this date to mid-2010. 
According to the CSTC-A Commander, given resource constraints and the new emphasis on fielding quality forces, certain deadlines for the fielding, generation, and development of Afghan forces have had to be extended. In addition to capability milestones, personnel and equipment requirements have also been established since our last report. In 2005, we noted that documents identifying personnel and equipment requirements for the Afghan National Security Forces were not complete. However, since that time, the Afghan Ministries of Defense and Interior, assisted by CSTC-A, have completed personnel and equipment requirements, known as Tashkils. The Tashkils list in detail the authorized staff positions and equipment items for the ANA and ANP. Moreover, ANA Tashkils have been converted into an electronic force management database by the U.S. Army Force Management Support Agency, which provides standardization and consistency given frequent CSTC-A personnel rotations. Agency officials expect that the ANP Tashkils will also be converted to a similar system. The United States has invested over $10 billion to develop the ANA since 2002, but less than 2 percent (2 of 105 ANA units rated) are assessed at CM1—full operational capability. Building an Afghan army that can lead its own operations requires manning, training, and equipping army personnel; however, U.S. efforts to build the Afghan army have faced challenges in all of these areas. First, while the ANA has increased in size and basic recruiting is strong, the ANA has experienced difficulties manning the army, such as finding qualified candidates for leadership positions and retaining personnel. Second, the insufficient number of U.S. trainers and coalition mentors in the field is a major impediment to providing the ANA with the follow-up training, including in areas such as advanced combat skills and logistics, needed to sustain the force in the long term. 
Finally, ANA combat units report significant shortages in approximately 40 percent of items defined as critical by Defense, including machine guns and vehicles. Some of these challenges, such as shortages of mentors and equipment, are due in part to competing global priorities, according to senior Defense officials. Without resolving these challenges, the ability of the ANA to reach full capability may be delayed. Defense planning calls for the development of an 80,000-person ANA force structure that includes Ministry of Defense personnel, sustaining institutions, and infantry forces capable of accomplishing its mission with limited assistance from the international community. As of April 2008, Defense reports that approximately 58,000 army personnel received training and were assigned to the ANA. The table below details the number of ANA forces authorized compared with the number currently assigned (see table 4). Since we reported in 2005, more personnel have been trained and assigned to the ANA. Focusing specifically on combat troops, Defense reports that 37,866 combat troops have been trained and assigned to the ANA as of April 2008, compared with 18,300 troops in March 2005. Although this represents more than a twofold increase in the number of combat troops, it is approximately 5,000 fewer than Defense had predicted would be trained by fall 2007. Moreover, new positions have been added to the ANA’s structure since our 2005 report, including an expanded Afghan air corps, and the ANA force structure has increased to 80,000. While more troops have received training, as of April 2008, only two ANA units—out of 105 rated—are assessed as CM1—fully capable. Thirty-six percent of ANA units (38 of 105 rated units) are assessed at CM2 and are capable of conducting their primary mission with routine international support. The remaining ANA units are less capable. 
Thirty-one percent (32 of 105 rated units) are CM3—capable of partially conducting their primary mission, but reliant on international support; 11 percent (11 of 105 rated units) are CM4—formed but not yet capable; and 21 percent (22 of 105 rated units) are not yet formed or not reporting (see table 5). While few ANA units are rated as fully capable, Defense officials stated that ANA troops had performed well in combat situations. Personnel we interviewed in Afghanistan who were assigned to mentor the ANA praised the efforts of Afghan troops, and U.S. and Afghan officials stated they were pleased with the development of the army to date. The expected date when the ANA will gain the capability to assume lead responsibility for its own security is unclear. As of April 2008, monthly reports provided by CSTC-A show the expected date of full ANA capability as March 2011. However, this date does not account for shortfalls in the required number of mentors and trainers. Thus, Defense officials cautioned that currently predicted dates for the achievement of a fully capable Afghan army are subject to change and may be delayed. U.S. efforts to build the ANA have faced challenges in three areas: manning the army, including recruiting for leadership positions and retaining personnel; fielding the required number of U.S. trainers and coalition mentors with ANA units in the field to assist in developing capable ANA forces; and overcoming shortages of critical equipment items. Although the ANA has grown in numbers, it faces manning challenges, including absenteeism, recruitment of leaders and specialists, and retention of personnel. First, although approximately 32,700 combat personnel received training and were assigned to one of the five ANA corps, the number of combat troops on hand is less than the number trained and assigned due to attrition, absenteeism, scheduled leave, and battlefield casualties. 
As of February 2008, Defense reported that about 20 percent of combat personnel assigned were not present for duty (see fig. 4). Although some of those absent from the army may have scheduled their absence or been killed in the line of duty, Defense assessment reports from November 2007 to February 2008 show between 8 and 12 percent of combat unit personnel were absent without leave (AWOL), with AWOL rates as high as 17 percent for soldiers in one ANA corps. For the ANA to achieve sustained growth, a senior Defense official stated that AWOL rates should be no higher than 8 percent. Officials attributed these absences to a variety of causes, such as soldiers leaving their units to take their pay home and the lack of significant penalties for such absences. To address these issues, the Afghan Ministry of Defense, assisted by CSTC-A, has initiated programs to allow soldiers to transfer their pay to family members and to facilitate the deposit of ANA salaries directly into soldiers’ bank accounts. CSTC-A officials stated these programs should reduce AWOL rates. Second, although basic recruiting is strong, the ANA is experiencing difficulties finding qualified candidates for leadership and specialist positions. Defense reports that recruiting goals for ANA infantry positions have been met, despite adjustments to increase ANA training output by 6,000 soldiers annually. However, CSTC-A noted shortfalls in the number of candidates available for non-commissioned officer (NCO) and specialty skill positions, such as logistics and medical support. Between November 2007 and February 2008, ANA manning levels for NCOs ranged from 50 to 70 percent of the authorized number. NCOs provide a vital link between senior officers and soldiers and can provide leadership to ANA units in the field, according to a senior Defense official. Officials attributed the shortage to the low level of literacy among ANA recruits. 
CSTC-A is attempting to address this shortfall by promoting NCOs from within ANA ranks and implementing new programs to target literate recruits. CSTC-A expects to have greater than 90 percent of the ANA’s authorized NCOs staffed by summer 2008. The ANA is also experiencing difficulties manning specialist positions such as logistics, medical support, and engineering. Although the ANA has developed courses to train military specialists, the current Afghan army is composed primarily of infantry forces. This is, in part, because ANA recruits learn basic infantry skills first. However, this focus is also due to difficulties identifying candidates who are suitable for advanced training. According to Defense officials, without suitably trained support personnel, the ANA will need to rely on coalition forces to provide support services. Third, the ANA is facing challenges retaining personnel. A typical ANA contract lasts for 3 years. At the end of a contract, ANA personnel are given the opportunity to re-enlist with the Afghan army. Between March 2006 and February 2008, just over half of those combat personnel eligible to re-enlist opted to do so, as shown in table 6. U.S. and Afghan officials attributed these re-enlistment rates to a variety of factors, such as stationing soldiers away from their families, the rapid pace of ANA military operations, and the higher salaries offered by private companies and insurgent groups looking to recruit trained Afghan soldiers. To address these factors, a senior Defense official stated that the Ministry of Defense and CSTC-A are discussing the implementation of several programs, such as allowing re-enlisting soldiers greater choice in determining where they will be stationed and increasing re-enlistment bonuses. Without the ability to retain trained personnel, ANA units will continue to lack experience and thus may be delayed in gaining the ability to lead security operations. 
For instance, in November 2007, the capability assessment of the ANA’s 209th corps in northern Afghanistan lowered the rating of one of its battalions from CM2 to CM3 when the battalion failed to retain approximately half of its NCOs. Further, the assessment noted that progress developing the capability of this battalion could be delayed nearly a year. Although some U.S. embedded trainers or coalition mentors are present in every ANA corps, the ANA faces a shortage of the personnel required to assist in its development. According to CSTC-A’s Campaign Plan, after an ANA unit is fielded, either an embedded training team (composed of U.S. personnel) or a mentoring team (composed of coalition personnel) should be assigned to the unit. These teams are responsible for developing the skills of ANA units from initial fielding until the unit has developed the capability to assume lead responsibility for its security mission. As the ANA unit builds capability, embedded trainers and mentors guide and assess the units and provide them with access to air support and medical evacuation. Shortages exist in the number of embedded trainers and mentors fielded. For instance, as of April 2008, the United States has fielded 46 percent (1,019 of 2,215) of Defense’s required number of embedded trainers. Officials attributed these shortfalls to competing U.S. priorities for Defense personnel, including the war in Iraq. CSTC-A has submitted requests for additional forces to act as embedded trainers to assist the ANA; however, these requests have been deferred. As of April 2008, members of the international community assisting in this effort have fielded 32 out of 37 mentor teams promised, although the number of international mentors in the field is smaller than the number of U.S. embedded trainers. Approximately one-third of personnel in the field assisting ANA unit development are coalition mentors, while two-thirds are U.S. personnel. 
Without adequate training or mentoring, the ANA’s ability to take the lead in security operations may be delayed. First, Defense officials have cited an insufficient number of embedded trainers and coalition mentors deployed with units in the field as the major impediment to providing the ANA with the training it needs to establish the capabilities necessary to sustain the force in the long term, such as maneuver skills in battalion-level operations, intelligence collection, and logistics. Without these skills, smaller ANA units cannot operate collectively at the battalion level, must rely on the coalition for support tasks, and cannot assume the lead for their own security. Second, as ANA units achieve greater levels of capability, embedded trainers and mentors are responsible for assessing and validating their progress. CSTC-A’s Campaign Plan states that the validation process is intended to improve collective training of ANA units; however, without adequate numbers of U.S. embedded trainers and coalition mentors, this validation will be slowed. CSTC-A officials stated that this delay in validation would lengthen the amount of time it will take the ANA to achieve full capability. Moreover, Defense officials noted that, as the number of ANA units fielded increases, the number of U.S. embedded training and coalition mentoring personnel needed also rises. For instance, when we visited Afghanistan in August 2007, Defense officials stated that 73 U.S. embedded training and coalition mentoring teams were needed to assist the development of the ANA; however, Defense officials projected that by December 2008, 103 teams would be needed. Without additional training and mentoring personnel to meet this increased need, delays in ANA development will likely be exacerbated. Since we reported in 2005, new equipment plans for the ANA have been implemented and the ANA has received more equipment items. 
In 2005, Defense planned to equip the Afghan army with donated and salvaged weapons and armored vehicles. However, much of this equipment proved to be worn out, defective, or incompatible with other equipment. In 2006, Defense began providing some ANA forces with U.S. equipment. Further, as security deteriorated, equipment needs changed and Defense planned to provide more protective equipment, such as armored Humvees, and more lethal weapons, such as rocket-propelled grenades. In support of these efforts, approximately $3.7 billion was provided between fiscal years 2005 and 2008 to equip the ANA. As of February 2008, CSTC-A reports that ANA combat forces are equipped with 60 percent of items defined as critical by Defense, a 7-percentage-point increase since August 2007. Despite these advances, shortages exist in a number of equipment items defined as critical by Defense. For instance, of 55 critical equipment items for ANA combat forces, CSTC-A reports having less than half of the required amount on hand for 21 of these items. Types of critical equipment items with significant shortfalls include vehicles, weapons, and communication equipment (see table 7). Although shortfalls exist for certain items defined as critical by Defense, such as NATO-standard machine guns, this does not necessarily mean that the ANA is unequipped. Defense officials stated that while ANA forces wait to receive NATO-standard weapons, Eastern bloc substitutes will be used. However, several ANA combat corps reported shortages in these items as well. For instance, each month between November 2007 and February 2008, at least 2 of 5 ANA corps reported shortages in Eastern bloc anti-tank weapons and 1 of 5 ANA corps reported shortages in Eastern bloc light machine guns. Moreover, shortfalls in items for which no Eastern bloc substitute is being used, such as communication equipment and cargo trucks, were reported in every ANA combat corps in February 2008. 
Defense officials attribute these shortfalls to a variety of factors, including competing global priorities for equipment, production delays, and delayed receipt and execution of fiscal year 2007 funding. As equipment orders are filled, ANA units may not be the top priority to receive certain equipment items. CSTC-A officials said that U.S. soldiers currently in combat have first priority to receive some of the equipment that is also requested for the ANA, while security forces in other nations, such as Iraq, may also be higher priority than the Afghan army. When U.S. forces or other nations have higher priority to receive equipment, CSTC-A officials noted that ANA orders are delayed. Officials at the U.S. Army Security Assistance Command also stated that Iraq may be a higher priority than Afghanistan, while a senior official from the Defense Security Cooperation Agency (DSCA) stated that other nations, such as Georgia and Lebanon, may also receive higher priority. Furthermore, production delays for certain equipment items may contribute to equipment shortfalls. For instance, CSTC-A officials stated that due to production delays, certain equipment items, such as NATO-standard heavy machine guns and mortars, were not currently available and would not likely be delivered until 2009 or 2010. Similarly, Defense officials in Washington, D.C., stated that production limitations were responsible for some equipment shortages, particularly in the case of NATO-standard mortars. Additional factors cited as contributing to equipment shortages included delayed receipt and execution of fiscal year 2007 funding, accelerated fielding of ANA units, and difficulties distributing equipment to the field. One method to help address shortages while western equipment is delayed is through increased equipment donations from the international community. 
CSTC-A is currently seeking additional contributions, particularly of Eastern bloc equipment, such as the basic soldier assault rifle. Between 2002 and March 2008, over 40 non-U.S. donors provided approximately $426 million to assist in the training and equipping of the ANA. Eighty-eight percent of this support has been in the area of equipment; however, the value of equipment donations is determined by the donor, according to CSTC-A officials. The quality of this donated equipment has been mixed (see fig. 5), and delivery of some donations has been delayed due to limited funds to pay for shipments into Afghanistan. To address quality issues, NATO and CSTC-A have established procedures to verify that international donations comply with current needs for the ANA and, if necessary, verify the condition and completeness of equipment. Furthermore, to defray the cost of shipments into Afghanistan, a NATO-administered trust fund has been established to support the transportation of equipment into Afghanistan. However, Defense officials stated that the amount of money in the trust fund, which they estimated to be approximately $1 million, is limited and may not support the transportation of a large number of donations. CSTC-A has also set aside funding to transport donated goods when required. The development of capable ANA forces may be delayed by shortages in equipment, as units cannot be certified as fully capable in equipment unless they have 85 percent or more of their critical equipment items. CSTC-A anticipates that all ANA brigades will be equipped to at least 85 percent of requirements for critical equipment items by December 2008; however, according to Defense’s March 2008 monthly status report, expected dates for achieving CM1 in equipment were pushed back for 12 of 14 combat brigades by 1 to 7 months due, in part, to delayed delivery and distribution of items such as vehicles and weapons. 
Moreover, shortages in equipment items may hinder training efforts, since having certain equipment items on hand, such as trucks, may be necessary to teach ANA personnel logistics and maintenance skills. Although the ANP has reportedly grown in number since 2005, after an investment of nearly $6 billion, no police unit is assessed as fully capable of performing its mission. Development of an Afghan police force that is fully capable requires manning, training, and equipping of police personnel. However, the United States faces challenges in several areas related to these efforts to build a capable police force. First, less than one-quarter of the ANP has police mentors present to provide training in the field and verify that police are on duty. Second, the Afghan police have not received about one-third of the equipment items Defense considers critical, and continue to face shortages in several categories of equipment, including trucks, radios, and body armor. In addition, Afghanistan’s weak judicial system hinders effective policing, and our analysis of status reports from the field indicates that the ANP consistently experiences problems with police pay, corruption, and attacks, including by insurgents. Recognizing that these challenges hamper ANP development, Defense began a new long-term initiative in November 2007 to reconstitute the uniformed police—the largest component of the Afghan police. However, the continuing shortfall in police mentors may pose a risk to the initiative’s success. Defense defines a fully capable 82,000-person ANP force as one that is able to independently plan, execute, and sustain operations with limited coalition support. However, Defense reporting indicates that, as of April 2008, no police unit was assessed as fully capable of performing its mission (see table 8). 
Furthermore, among rated units, about 96 percent (296 of 308) of uniformed police districts and all border police battalions (33 of 33), which together comprise about three-fourths of the ANP’s authorized end-strength, were rated at CM4—the lowest capability rating. Six of the remaining 12 uniformed police districts were rated at CM2, and the other 6 at CM3. Overall, Defense assessed approximately 4 percent (18 of 433 units rated) of police units as partially capable and about 3 percent (12 of 433 units rated) as capable of leading operations with coalition support. According to Defense reporting as of April 2008, the expected date for completion of a fully capable Afghan police force is December 2012—a date that conflicts with the Afghan government and international community benchmark of establishing police forces that can effectively meet Afghanistan’s security needs by the end of 2010. Defense reporting indicates that, as of April 2008, nearly 80,000 police had been assigned out of an end-strength of 82,000 (see table 9). This is more than double the approximately 35,000 we previously reported as trained as of January 2005. Despite this reported increase in police manning, it is difficult to determine the extent to which the police force has grown. As we noted in May 2007, the Afghan Ministry of Interior produces the number of police assigned, and the reliability of these numbers has been questioned. A Defense census undertaken since our May 2007 report raises additional concerns about these manning numbers. Specifically, Defense conducted a census to check the reliability of ministry payroll records and reported in September 2007 that it was unable to verify the physical existence of about 20 percent of the uniformed police and more than 10 percent of the border police listed on the ministry payroll records for the provinces surveyed. 
Because Defense’s census did not cover all 34 Afghan provinces, these percentages cannot be applied to the entire police force. Nonetheless, the results of Defense’s census raise questions about the extent to which the ANP has grown since our 2005 report. According to Defense officials, the shortage of available police mentors has been a key impediment to U.S. efforts to conduct training and evaluation and verify that police are on duty. Police mentor teams in Afghanistan consist of both civilian mentors, who teach law enforcement and police management, and military mentors, who provide training in basic combat operations and offer force protection for the civilian mentors. As we reported in 2005, international peacekeeping efforts in Bosnia, Kosovo, and East Timor have shown that field-based training of local police by international police mentors is critical to the success of similar programs to establish professional police forces. Such training allows mentors to build on classroom instruction and provide a more systematic basis for evaluating police performance. Defense reporting indicates that, as of January 2008, less than one-quarter of the ANP had police mentor teams present. DynCorp, State’s contractor for training and mentoring the police, was able to provide about 98 percent (540 of 551) of the authorized number of civilian mentors as of April 2008. However, as of the same date, only about 32 percent (746 of 2,358) of required military mentors were present in country. Due to this shortage of military mentors to provide force protection, movement of available civilian mentors is constrained—a serious limitation to providing mentor coverage to a police force that is based in more than 900 locations around the country and, unlike the army, generally operates as individuals, not as units. 
Moreover, a knowledgeable CSTC-A official stated that additional civilian mentors would not help to address the shortfall in military mentors because they could not be deployed to the field without military mentors to provide protection. According to Defense officials, the shortfall in military mentors for the ANP is due to the higher priority assigned to deploying U.S. military personnel elsewhere, particularly Iraq. While the United States and the EU have taken steps to provide additional police mentors, the extent to which these efforts will address current shortfalls is unclear. In January 2008, Defense announced that approximately 1,000 Marines would be sent to Afghanistan in the spring of 2008 on a one-time, 7-month deployment to assist in the training and development of the ANP. However, this temporary deployment will neither fully nor permanently alleviate the underlying shortage of military mentors for the ANP, which stood at over 1,600 as of April 2008. In June 2007, the EU established a police mission in Afghanistan with the objective of providing nearly 200 personnel to mentor, advise, and train the Ministry of Interior and ANP. According to State, the number of EU personnel pledged has subsequently increased to about 215. However, State figures indicate that the EU had staffed about 80 personnel as of February 2008—less than 40 percent of its pledged total. Moreover, State officials said that restrictions in the EU mandate limit the extent to which its personnel are permitted to provide field-based training. Defense, State, and DynCorp officials all identified the continuing shortfall in police mentors as a challenge to U.S. efforts to develop the Afghan police. Specifically, the mentor shortage has impeded U.S. 
efforts in three areas:

Field-based training: Senior Defense officials, including the commanding general of CSTC-A, stated that the ongoing shortfall in police mentors has been the primary obstacle to providing the field-based training necessary to develop a fully capable police force. In addition, State has reported that a significant increase in mentoring coverage is essential to improving the quality of the police through field-based training. DynCorp officials also acknowledged the shortage of mentors to be a challenge to providing necessary training.

Evaluation: According to a knowledgeable CSTC-A official, the shortage of police mentors is a serious challenge to evaluating the capability of the police and identifying areas in need of further attention. Defense recently introduced a monthly assessment tool to be used by mentors to evaluate police capability and identify strengths and weaknesses. However, CSTC-A identified extremely limited mentor coverage of the police as a significant challenge to using this tool to generate reliable assessments. As of February 2008, police mentors were able to assess only about 11 percent of uniformed police districts using this new tool.

Verification of police on duty: The shortage of available police mentors has impeded U.S. efforts to verify the number of Afghan police on duty. For example, as of April 2008, Defense could not verify whether any police were reporting for duty in 5 of Afghanistan’s 34 provinces due to the lack of mentors. Furthermore, although Defense has planned to conduct monthly surveys to determine how many police are reporting for duty in selected districts, a knowledgeable CSTC-A official stated that mentors are not available to conduct the surveys. However, a United Nations survey of a random sample of 15 police districts found fewer than half of authorized police reporting for duty. 
Without sufficient police mentors present to conduct field-based training and evaluation and to verify police manning, development of fully capable, fully staffed Afghan police forces may continue to be delayed. Although DynCorp has been able to provide nearly all of the authorized number of civilian mentors, the company stated that the activities of these mentors have been complicated by a dual chain of command between State and Defense. According to a 2005 interagency decision, Defense is responsible for directing the overall U.S. effort to train and equip the Afghan police, while State is responsible for providing policy guidance and management of the DynCorp contract. According to DynCorp, this dual chain of command has affected its efforts in multiple ways, such as by producing conflicting guidance and complicating reporting, placement of personnel, the use of facilities, and training and mentoring activities. Between fiscal years 2005 and 2008, Congress made available $5.9 billion to support the training and equipping of the ANP. At least $1.3 billion of that amount, or 22 percent, has been directed toward equipment purchases. Although equipping of the police has improved in recent months, shortages remain in several types of equipment that Defense considers critical. Between our August 2007 visit to Afghanistan and February 2008, the percentage of critical ANP equipment on hand grew from 53 to 65 percent. This improvement includes increased totals of items on hand, such as rifles and grenade launchers. Further, Defense anticipates the ANP will be equipped with 85 percent of critical equipment items by December 2008. However, as of February 2008, shortages remained in several types of critical equipment, such as trucks, radios, and body armor. Defense officials cited several factors that have contributed to these shortages. First, according to CSTC-A officials, equipment shortages are due to competing priorities, particularly the need to equip U.S. 
forces deployed to operational situations and security forces in Iraq. Second, CSTC-A attributed the specific shortage in body armor to the inability of two supplying contractors to deliver the requested items on schedule. Third, Defense officials cited additional causes of equipment shortages such as delayed receipt and execution of fiscal year 2007 funding and instances where CSTC-A did not provide equipment orders in a timely manner. Defense officials and documentation also indicated that distributing equipment to police in the field once it is procured is challenging due to the unstable security situation, difficult terrain, weather conditions, and the remoteness of some police districts. In addition, Defense officials expressed concerns with the quality and usability of thousands of weapons donated to the police. For example, officials estimated that only about 1 in 5 of the nearly 50,000 AK-47 automatic rifles received through donation was of good quality. Our analysis of weekly progress reports produced in 2007 by DynCorp civilian police mentors provides additional evidence of equipment-related challenges and other logistical difficulties. Specifically, 88 percent (46 of 52) of weekly reports contained instances of police operating with equipment of insufficient quality or quantity or facing problems with facilities or supplies. For example, the reports include several cases where Afghan border police are inadequately equipped to defend their positions on the border or face insurgent forces. Recognizing this shortcoming, CSTC-A has planned to equip the border police with heavy machine guns, which it expects to arrive in the fall of 2008. In addition, 81 percent (42 of 52) of weekly reports contained examples of limited police ability to account for the equipment provided to them. In July 2007, CSTC-A initiated efforts to train the police in basic supply and property accountability procedures. 
According to CSTC-A, equipment is no longer being issued to police districts unless the districts’ property officers are first trained. For example, more than 1,500 trucks have been on hand and ready for issue since late 2007 (see fig. 6), but the Afghan Minister of Interior has delayed distribution of these vehicles until adequate accountability procedures and driver training are established in the target districts. Similarly, as of February 2008, about half of the approximately 17,000 machine guns on hand had not been distributed to the police. Establishing a working judiciary in Afghanistan based on the rule of law is a prerequisite for effective policing. However, in 2005 and 2007, we reported that few linkages existed in Afghanistan between the Afghan judiciary and police, and the police had little ability to enforce judicial rulings. According to State, much of Afghanistan continues to lack a functioning justice system. In addition, according to CSTC-A, the slow rate at which the rule of law is being implemented across Afghanistan inhibits effective community policing. Our analysis of DynCorp’s weekly progress reports from 2007 indicates that police in the field also face persistent problems with pay, corruption, and attacks.

Pay problems: 94 percent (49 of 52) of weekly reports contained instances of police experiencing problems with pay. These include numerous examples of police who have not been paid in several months and multiple cases of police who quit their jobs as a result of pay-related problems, thereby potentially leaving their districts more vulnerable to insurgent forces. Our prior work found that one cause for the corrupt practices exhibited by many Afghan police is their low, inconsistently paid salaries. Furthermore, according to State, the Ministry of Interior’s traditional salary distribution process afforded opportunities for police chiefs and other officials to claim a portion of their subordinates’ salaries for themselves.
To minimize skimming of salaries, CSTC-A is instituting a three-phase program to pay all salaries into bank accounts via electronic funds transfer by December 2008. According to Defense, electronic funds transfer had been implemented in 12 of 34 provinces as of August 2007. The government of Afghanistan also has decided to increase police salaries to achieve pay parity with the Afghan army.

Corruption: 87 percent (45 of 52) of weekly reports contained instances of corruption within the police force or the Ministry of Interior. These include multiple examples of police personnel providing weapons or defecting to the Taliban and several cases of high-ranking officials engaging in bribery or misconduct. Moreover, State documentation notes that one branch of the ANP, the highway police, was disbanded in early 2007 because it was notorious for corruption. However, DynCorp weekly reporting indicates that several thousand highway police were still working and being paid by the Ministry of Interior as of September 2007. The ministry, in conjunction with CSTC-A and the United Nations Assistance Mission in Afghanistan, is currently engaged in an effort to reform and streamline the ANP rank structure according to several criteria, including evidence of previous corruption among ANP officers.

Attacks: 85 percent (44 of 52) of weekly reports contained instances of attacks against the police. These include numerous cases where police are targeted by suicide bombers or with improvised explosive devices. According to DynCorp, insurgent attacks against the ANP have increased due to greater involvement of the ANP in counterinsurgency operations and the perception that the police are a more vulnerable target than the Afghan army and coalition forces.
DynCorp weekly reports do include several instances where police were able to successfully fend off attacks; however, they also contain multiple cases in which the dangerous working conditions police face have made it difficult to retain or recruit personnel. Recognizing several of the challenges faced by the ANP, Defense began a new initiative in November 2007 to train and equip the Afghan uniformed police. Defense documentation that outlines this initiative acknowledges that the Afghan police lack capability, have been inadequately trained and equipped, and are beset by corruption. To target these and other challenges, Defense introduced the Focused District Development plan in November 2007 to train and equip the uniformed police—those assigned to police districts throughout the country who make up over 40 percent of the intended ANP end-strength of 82,000. According to Defense, reforming the uniformed police—the immediate face of the Afghan government to citizens at the local level—is the key to the overall reform of the ANP. Under the Focused District Development model, the entire police force of a district is withdrawn from the district and sent to a regional training center to train together for 8 weeks and receive all authorized equipment while their district is covered by the Afghan National Civil Order Police (ANCOP), a specialized police force trained and equipped to counter civil unrest and lawlessness (see fig. 7). The police force then returns to its district, where a dedicated police mentor team provides follow-on training and closely monitors the police for at least 60 days. Defense expects to be able to reconstitute about 5 to 10 districts at a time for the first year of Focused District Development, with each training cycle lasting about 6 to 8 months. Overall, according to State, it will take a minimum of 4 to 5 years to complete the initiative.
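As a rough plausibility check on these figures, the batch size and cycle length imply that training cycles must overlap heavily: strictly sequential cycles could not come close to a 4-to-5-year finish. The sketch below illustrates this arithmetic. Note that the district total used here is a hypothetical assumption for illustration only; the report does not state how many uniformed police districts exist.

```python
import math

# Assumed and report-derived figures. TOTAL_DISTRICTS is hypothetical,
# not a number from the report; the other two come from the ranges
# Defense cited ("5 to 10 districts at a time", "6 to 8 months" per cycle).
TOTAL_DISTRICTS = 365        # hypothetical count of uniformed police districts
DISTRICTS_PER_CYCLE = 10     # upper end of the stated batch size
CYCLE_MONTHS = 6             # lower end of the stated cycle length

cycles_needed = math.ceil(TOTAL_DISTRICTS / DISTRICTS_PER_CYCLE)   # 37 cycles

# If cycles ran strictly one after another, the program would take decades.
sequential_years = cycles_needed * CYCLE_MONTHS / 12               # 18.5 years

# To finish in State's 4-to-5-year minimum instead, a new batch of
# districts would need to start roughly every month or two, with many
# cycles running concurrently.
months_between_starts = 5 * 12 / cycles_needed                     # ~1.6 months
```

Under these assumptions, even at the fastest cycle length and largest batch size, a 4-to-5-year completion requires launching a new district batch long before the previous batch finishes its cycle, which is consistent with the mentor-team exhaustion concern discussed below.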
State documentation indicates that no districts had completed an entire Focused District Development cycle as of March 2008. Until an entire cycle is completed, it will be difficult to fully assess the initiative. However, limited police mentor coverage may complicate efforts to execute this new program. Defense documentation identifies sufficient police mentor teams as the most important requirement for successful reform. However, according to the commanding general of CSTC-A, the ongoing shortfall in police mentors available to work with newly trained district police will slow implementation of the initiative. In addition, a senior Defense official stated that unless the mentor shortage is alleviated, the number of police mentor teams available to provide dedicated training and monitoring will eventually be exhausted. Moreover, according to DynCorp, civilian mentors have an important role in Focused District Development—particularly in providing district-level mentoring—but are not accompanying military mentors into districts that are considered unsafe. Given that one selection criterion for districts is location in unstable areas of the country where better policing might improve the security situation, it is unclear how often civilian mentors will be able to participate in district-level mentoring. Defense documentation also identifies sufficient equipment availability as a requirement for successful reform. According to Defense, adequate equipment is currently on hand to support the Focused District Development initiative. However, given current shortfalls in various ANP equipment items, it is unclear whether reserving sufficient equipment for the initiative will worsen equipment shortages for elements of the ANP, such as the border police, that the initiative does not currently target. Establishing capable Afghan national security forces is critical to improving security in Afghanistan and the U.S.
efforts to assist foreign allies and partners in combating terrorism. To date the U.S. has invested billions of dollars in this effort and estimates that billions more will be required to build and sustain the ANSF beyond the existing forces—few of which have been assessed as fully capable of conducting their primary mission. As such, measuring progress and estimating long-term costs are particularly important given that U.S. officials estimate that this mission could exceed a decade. The recommendations in our 2005 report called for detailed Defense and State plans that include clearly defined objectives and performance measures, milestones for achieving these objectives, future funding requirements, and a strategy for sustaining the results achieved, including plans for transitioning responsibilities to Afghanistan. In 2007, Defense provided a 5-page document in response to our recommendation. However, this document included few long-term milestones, no intermediate milestones for judging progress, and no sustainability strategy. In 2008, Congress mandated that Defense, in coordination with State, submit reports on a comprehensive and long-term strategy and budget for strengthening the ANSF and a long-term detailed plan for sustaining the ANSF. Defense has yet to provide these reports. As such it remains difficult to determine if U.S. efforts are on track and how much more they will cost to complete. Until a coordinated, detailed plan is completed, Congress will continue to lack visibility into the progress made to date and the cost of completing this mission—information that is essential to holding the performing agencies accountable. Consequently, we believe that future U.S. investments should be conditioned on the completion of a coordinated, detailed plan for developing a capable ANSF. 
To help ensure that action is taken to facilitate accountability and oversight in the development and sustainment of the ANSF, and consistent with our previous 2005 recommendation and the 2008 congressional mandate, Congress should consider conditioning a portion of future appropriations related to training and equipping the ANSF on completion of a coordinated, detailed plan that, among other things, includes clearly defined objectives and performance measures, milestones for achieving these objectives, future funding requirements, and a strategy for sustaining the results achieved, including plans for transitioning responsibilities to Afghanistan; and the timely receipt of the reports mandated by sections 1230 and 1231 of Pub. L. 110-181, the first of which are already late. State and Defense provided written comments on a draft of this report. State’s comments are reproduced in appendix III. Defense’s comments, along with GAO responses to selected issues raised by Defense, are reproduced in appendix IV. The agencies also provided us with technical comments, which we have incorporated throughout the draft as appropriate. State appreciated GAO’s views on how to improve efforts to develop the ANSF, which it considers critical to long-term sustainable success in Afghanistan. State expressed concerns about conditioning future appropriations on the completion of a detailed plan. In addition, State highlighted ongoing coordination efforts with Defense as well as certain other operational changes, many of which occurred after the completion of our fieldwork in August 2007. For example, while we note that civilian mentors are not accompanying military mentors into districts that are considered unsafe, State notes in its comments that civilian police mentors are now deployed with their military counterparts to all ongoing Focused District Development districts and that all efforts are made to enable the deployment of civilian police in support of the program. 
We acknowledge State’s concerns and appreciate its efforts to coordinate with Defense. However, we believe that a coordinated, detailed plan that clearly identifies the various agencies’ roles would be beneficial, given the continuous turnover of U.S. government staff in Afghanistan. We believe a coordinated, detailed plan with intermediate milestones is also important particularly in light of the new Focused District Development initiative for ANP training, which will entail considerable resources and time to complete. Further, intermediate milestones would provide policymakers with more information regarding the transition to a normalized security assistance relationship, as discussed by State in its comments. Defense disagreed that Congress should consider conditioning a portion of future appropriations on completion of a coordinated, detailed plan to develop the ANSF, and stated that current guidance provided by State and Defense to the field is sufficient to implement a successful program to train and equip the ANSF. Defense noted that the 5-page document it provided to GAO in January 2007 articulated goals for the size, capabilities, and requirements for the ANSF, and reflected an approach approved by multiple agencies—including State. Defense also cited a number of other documents it considers to be part of the effort to develop the ANSF. Furthermore, Defense disagreed with our conclusion that, absent a detailed plan, progress in developing the ANSF is difficult to assess, and stated that monthly progress reports and communication with Congress provide legislators with the information needed to assess the program and allocate resources. We do not believe that the 5-page document provides a strategic-level plan for the development of the ANSF. 
The document does not represent a coordinated Defense and State plan for completing and sustaining the ANSF with sufficient detail and transparency for Congress and others to make informed decisions concerning future resources. This 5-page document, which Defense now refers to as a “Strategic Vision” and which CSTC-A officials were unaware of at the time of our review, does not identify or discuss the roles and responsibilities of the Department of State, Defense’s key partner in training the ANP. This is an element that one would expect in a strategic planning document for ANSF development. Furthermore, the document contains just one date, December 2008, by which time the 152,000-person ANSF would be completed. Defense’s document lacks any other intermediate or long-term milestones by which progress could be measured. While the U.S. role in training and equipping the ANSF could exceed a decade, according to CSTC-A representatives, neither the 5-page document nor the documents identified by Defense in its comments to GAO constitute a sustainability strategy. For example, while Defense states that the international community will need to sustain the ANSF for the “near-term” until government revenues increase in Afghanistan, the document lacks further detail regarding the expected time frames for increasing government revenues, as well as a definition of “near-term.” As such, it remains unclear how long Defense and State expect to support the ANSF. Furthermore, we maintain that, without a coordinated, detailed plan, assessing progress in developing the ANSF is difficult. While Defense produces various documents that report in detail on the current status of the ANSF, these documents do not contain intermediate milestones or consistent end dates necessary to determine if the program is on track to achieve its desired results within a set time frame. For additional details, refer to GAO comments that follow appendix IV.
We are sending copies of this report to interested congressional committees. We will also make copies available to others on request. In addition, this report is available on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To analyze U.S. plans for developing and sustaining the Afghan National Security Forces (ANSF) and identify the extent to which these plans contain detailed objectives, milestones, future funding requirements, and sustainability strategies, we reviewed planning documents from Combined Security Transition Command—Afghanistan (CSTC-A) and the Office of the Secretary of Defense, including draft and CSTC-A-approved versions of the Campaign Plan for the Development of Afghan National Military and Police Forces (Campaign Plan); a planning document provided by the Office of the Secretary of Defense; and a Defense briefing on ANSF sustainment. We evaluated these documents to determine the extent to which they contain the four criteria previously recommended by GAO and discussed them with cognizant Defense officials in the Office of the Secretary of Defense and the Joint Chiefs of Staff. We also spoke with officials from the U.S. Central Command and State’s Bureau of International Narcotics and Law Enforcement Affairs to discuss their contribution to the Campaign Plan. In addition, while in Kabul, we discussed the Campaign Plan with officials from Embassy Kabul; the commanding general of CSTC-A and other CSTC-A officials; and the Afghan Minister of Defense. Finally, we examined the Afghanistan Compact and Afghanistan National Development Strategy to gain familiarity with documents developed by Afghanistan and the international community. 
The information on foreign law in this report does not reflect our independent legal analysis but is based on interviews and secondary sources. To determine the progress made and challenges faced by the United States in building the Afghan National Army (ANA), we reviewed monthly assessment reports produced by Task Force Phoenix and the Joint Chiefs of Staff as well as documents obtained from several other Defense offices and agencies, including the Office of the Secretary of Defense, CSTC-A, the Defense Security Cooperation Agency, and the U.S. Army Force Management Support Agency. In addition, we met with the following officials to discuss the progress made and challenges faced by the United States in building the ANA: In the Washington, D.C., area, we met with officials from the Joint Chiefs of Staff, the Office of the Secretary of Defense, the Defense Security Cooperation Agency, the U.S. Army Force Management Support Agency, the Defense Intelligence Agency, and State’s Bureau of Political-Military Affairs. In Kabul, Afghanistan, we met with personnel mentoring the army; officials from CSTC-A, including its commanding general; Task Force Phoenix; Embassy Kabul; the North Atlantic Treaty Organization; MPRI; and the Afghan Ministry of Defense, including the Minister of Defense. We also visited an equipment warehouse and army training facilities. Moreover, we interviewed officials based in Afghanistan by telephone, including several CSTC-A representatives. To determine the progress made and challenges faced by the United States in building the Afghan National Police (ANP), we reviewed monthly assessment reports produced by Task Force Phoenix and the Joint Chiefs of Staff as well as documents obtained from several other Defense offices and agencies, including the Office of the Secretary of Defense, CSTC-A, the Defense Security Cooperation Agency, and the U.S. Army Force Management Support Agency. 
In addition, we met with the following officials to discuss the progress made and challenges faced by the United States in building the ANP: In the Washington, D.C., area, we met with officials from the Joint Chiefs of Staff, the Office of the Secretary of Defense, the Defense Security Cooperation Agency, the U.S. Army Force Management Support Agency, the Defense Intelligence Agency, State’s Bureaus of International Narcotics and Law Enforcement Affairs and Political-Military Affairs, and DynCorp International. In Kabul, Afghanistan, we met with U.S. police mentors; officials from CSTC-A, including its commanding general; Task Force Phoenix; Embassy Kabul; the United Nations; DynCorp International; MPRI; and the Afghan Ministry of Interior, including the Minister of Interior. We also visited an equipment warehouse and police training facilities. Further, we interviewed officials based in Afghanistan by telephone, including representatives of CSTC-A, DynCorp International, and the United Nations Development Programme’s Law and Order Trust Fund for Afghanistan. Additionally, we asked State to provide weekly progress reports produced by DynCorp International for 2005, 2006, and 2007. To identify challenges faced by the police, we conducted a content analysis to categorize and summarize the observations contained in these reports. Specifically, we independently proposed categories, agreed on the relevant categories, reviewed reports, and categorized the observations contained therein. Instances discussed in more than one report were only categorized and counted the first time they appeared. To ensure the validity and reliability of this analysis, we reconciled any differences. Once all differences were reconciled, we analyzed the data to identify the challenges most often discussed. Because State did not provide us a complete set of reports for 2005 or 2006, we were only able to perform this analysis on 2007 reports. 
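The tallying behind figures such as "94 percent (49 of 52)" is straightforward to reproduce once observations have been categorized and duplicate instances reconciled. The sketch below is purely illustrative: the category names and sample report data are hypothetical, not drawn from the actual DynCorp reports, but the counting logic (each report counted at most once per category, share expressed against the total number of reports) follows the methodology described above.

```python
from collections import defaultdict

def summarize_reports(reports, categories):
    """Count how many weekly reports contain at least one observation
    in each category, and express that count as a share of all reports.

    `reports` is a list of sets; each set holds the categories observed
    in one weekly report (an instance is categorized only in the first
    report where it appears, per the reconciliation step).
    """
    counts = defaultdict(int)
    for observed in reports:
        for category in observed:
            if category in categories:
                counts[category] += 1
    total = len(reports)
    return {c: (counts[c], total, round(100 * counts[c] / total))
            for c in categories}

# Hypothetical data: 4 weekly reports and the categories each mentions.
reports = [
    {"pay", "corruption"},
    {"pay", "attacks"},
    {"equipment"},
    {"pay", "corruption", "attacks"},
]
summary = summarize_reports(reports, ["pay", "corruption", "attacks", "equipment"])
# e.g. summary["pay"] == (3, 4, 75): pay problems appear in 3 of 4
# reports, or 75 percent, analogous to "94 percent (49 of 52)".
```

The actual analysis added a reconciliation pass in which two reviewers resolved categorization differences before the counts were taken; the function above assumes that step has already produced the per-report category sets.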
To determine the reliability of the data we collected on funding, mentors, equipment, and ANSF personnel numbers and capability, we compared and corroborated information from multiple sources and interviewed cognizant officials regarding the processes they used to compile the data. To determine the completeness and consistency of U.S. and international funding data, we compiled and compared data from Defense, State, and other donor countries with information from cognizant U.S. agency officials in Washington, D.C. We also compared the funding data with appropriations and authorization legislation, congressional budget requests, and reports to Congress to corroborate their accuracy. Additionally, we compared the funding data with our May 2007 Afghanistan report. Differences between table 1 in this report and the funding chart presented in our May 2007 report are due to the following factors: Certain funds were removed, such as those provided to support a protective detail for Afghanistan’s President, because agency officials later clarified that these dollars did not support efforts to train and equip the ANSF. Other funds were added, such as those used to provide support for counter narcotics police, because agency officials later clarified that these dollars did support such efforts. In addition, for fiscal years 2007 and 2008, totals printed in May 2007 included budget requests. Subsequently, some of these requested totals changed, such as the allocation of money in Defense’s 2008 Global War on Terror request and Defense’s support of efforts to train and equip Afghan counter narcotics police. Although we did not audit the funding data and are not expressing an opinion on them, based on our examination of the documents received and our discussions with cognizant agency officials, we concluded that the funding data we obtained were sufficiently reliable for the purposes of this engagement.
To determine the reliability of data on the number of military mentors, we corroborated figures in unclassified progress reports against classified mentor requirements and discussed Defense progress reports with the Joint Chiefs of Staff. We checked the reliability of data on the number of civilian mentors by comparing Defense and State figures for consistency and speaking to State officials. Finally, we assessed the reliability of data on European Union police mentors by comparing Defense, State, and European Union data and checking for inconsistencies. Based on these assessments and interviews, we determined that these data on mentors were sufficiently reliable for the purposes of this engagement. To assess the reliability of equipment data, we compared different lists of equipment on hand to corroborate their accuracy and interviewed cognizant officials by telephone to discuss shortages of equipment and procedures for keeping track of equipment provided to the ANA and ANP. Based on these comparisons and discussions, we concluded that the equipment data provided to us were sufficiently reliable for the purposes of this engagement. To assess the reliability of ANSF capability figures, we spoke with officials from CSTC-A, the Joint Chiefs of Staff, and State to discuss the processes by which these data are generated. Additionally, while in Kabul, we attended the monthly meeting during which Defense officials discuss and determine ANA capability figures. Moreover, we requested after-action reports to evaluate the capability of ANA troops in the field. However, Defense officials were not able to provide us with this documentation. To evaluate the reliability of ANSF personnel numbers, we spoke with officials from CSTC-A and the Joint Chiefs of Staff. Overall, based on our discussions with cognizant officials, we concluded that ANSF capability and ANA personnel data were sufficiently reliable for the purposes of this engagement. 
However, based on concerns expressed by the Joint Chiefs of Staff and highlighted in our prior work, as well as the results of the census conducted by Defense, we note in this report that the number of ANP reported as assigned may not be reliable. Because Defense relies on the number of ANP reported as assigned as a measure of progress in building the ANP, we include this figure in our report as evidence that the ANP appear to have grown in number over the last 3 years. However, we also note that due to concerns about the reliability of the figure, it is difficult to quantify the exact extent to which the ANP has grown. We conducted this performance audit from March 2007 through June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Afghan National Security Forces consist of the Afghan National Army and the Afghan National Police. The structure of these organizations is described below. (See table 10 for the Afghan army and table 11 for the Afghan police.) Combat forces make up 70 percent of the ANA’s personnel and are divided into five corps, located in different regions of Afghanistan. Each corps contains a number of brigades, most of which consist of five battalions: three light infantry battalions, one combat support battalion, and one combat services support battalion. The exception is the quick reaction force in 201st corps, which consists of one infantry battalion, one mechanized infantry battalion, and one armored battalion, in place of the three light infantry battalions. Each corps also includes one battalion of the National Commando Brigade (see fig. 8).
The ANP currently consists of six authorized components under the Ministry of Interior. The uniformed police, the largest of these six components, report to the police commanders of each Afghan province. Provincial commanders report to one of five regional commanders, who report back to the Ministry of Interior. The other five authorized components of the ANP all report directly to the ministry (see fig. 9).

The following are GAO’s comments on Defense’s written response, dated May 27, 2008, to our draft report.

1. Defense states that its document establishes quantitative and qualitative measures to assess ANSF development. While the 5-page document contains some qualitative measures to assess ANSF development, it contains only one milestone date, December 2008, when, according to the document, the ANSF will have achieved initial independent operating capability. However, this one milestone is not consistent with dates contained in monthly reports that track manning, training, equipment, and capability, which have fluctuated. While the monthly updates are useful in providing the status of ANSF capability, each monthly report is a snapshot in time without consistent baselines that would facilitate an assessment of progress over time. For example, even though the United States began funding and training the ANA in 2002, the February 2007 report that was provided to GAO as an attachment to the 5-page document uses three different baselines for assessing the ANSF—July 2005 for the number of trained and equipped Afghan army and police, June 2006 for the status of the ANA battalion Training and Readiness Assessments, and the first quarter of 2007 for the status of ANA and ANP embedded training teams and mentors. However, the report does not refer back to 2002 in measuring progress. Similarly, the Training and Readiness Assessments that are provided on a quarterly basis to congressional oversight committees are also snapshots in time.

2.
Defense maintains that the CSTC-A milestones are consistent with those in the 5-page Defense document. We disagree. The three phases and associated time frames of ANSF development are articulated differently in the 5-page document and the CSTC-A Campaign Plan. For example, Phase III in CSTC-A’s Campaign Plan—Transition to Strategic Partnership—is not identified as a phase in the 5-page document. Defense also contends that differences between the two documents are due to developments in the security environment. While this may be true, absent a detailed plan with specific time frames, it is difficult to assess the extent to which deteriorating security delayed ANSF development.

3. Defense notes that until government revenues increase in Afghanistan, the international community will need to sustain the ANSF and that such international support is required for the “near-term.” Moreover, Defense states that, where appropriate, it supports efforts to increase government revenues in Afghanistan. However, in the absence of further detail regarding the expected timeline for increasing government revenues—or the definition of “near-term”—it remains unclear how long the United States will need to support the ANSF. As we note in our report, the United States has been a major contributor to this mission—investing about $16.5 billion to develop the ANSF. Furthermore, current costs to sustain the force are estimated at over $2 billion annually. Given that the Afghan government is currently unable to support the recurring costs of its security forces and that U.S. officials estimate this mission could exceed a decade, additional clarity on the estimated length of time and amount of money needed to complete this mission, and the potential for Afghan financial contributions, could assist in conducting oversight of the program.

4. Defense states that the 5-page document received by GAO was a longer articulation of a plan approved by State.
However, although Defense and State are partners in training the ANP, the fact remains that State did not participate in the development of the 5-page document Defense provided to GAO, nor has State developed a plan of its own. Defense’s 5-page document does share basic end-strength and capability information with two slides on ANSF development approved by the Principals Committee for ANSF Development. However, these slides do not themselves constitute a coordinated plan and do not contain elements, such as intermediate milestones, identified by GAO in our 2005 recommendation and agreed to by Defense and State as needed. 5. Defense contends that the role of State in ANSF development is articulated in documents other than the 5-page document provided to GAO. However, while State’s role may be discussed elsewhere, the 5-page document provided to us by Defense does not describe the role of State or other key stakeholders. If, as stated, Defense intends this document to provide strategic-level guidance for the development of the ANSF, including in it an articulation of the roles and responsibilities of partners and key stakeholders could assist in implementing and coordinating the program’s efforts. For instance, we note in our report that the dual chain of command between State and Defense has complicated the efforts of civilian mentors assisting with the program. 6. We maintain that, without a detailed plan, assessing progress in developing the ANSF is difficult. While Defense produces various documents that report in detail on the current status of the ANSF, these documents do not contain consistent baseline data, intermediate milestones, or consistent end dates necessary to determine if the program is on track to achieve its desired results within a set time frame. For example, after 6 years and a U.S.
investment of about $16.5 billion in the program, Defense status reports show that, as of April 2008, less than 2 percent (2 of 105) of ANA units and no ANP units (0 of 433) are rated as fully capable, and the estimated completion dates for these forces are March 2011 and December 2012, respectively. Defense asserts this is impressive, particularly for the ANA. However, without interim milestones against which to assess the ANSF, it is difficult to know if this status constitutes progress or will allow Defense to meet its currently projected completion dates. Moreover, the completion dates cited by Defense do not constitute firm goals and have shifted numerous times during the course of our review. For instance, in monthly Defense reports dated June 2007, November 2007, and May 2008, completion dates for the ANA fluctuated from December 2008 to September 2010 to March 2011. Likewise, over the same period, completion dates for the ANP fluctuated from December 2008 to March 2009 to December 2012, with a 3-month period when the completion date was reported as “to be determined.” Moreover, as we note in our report, Defense officials stated that completion dates contained in its monthly status reports did not account for shortfalls in the required number of mentors and trainers and, therefore, could be subject to further change. Defense also states that it only began to support independent operations capability for the ANA in 2006. While it is true that planned capability for the ANA was upgraded in 2006, absent a detailed plan, it is unclear to what extent this planned capability upgrade should be expected to affect the timeline for the development of individual ANA units. Had Defense implemented GAO’s 2005 recommendation to produce such a plan, it might be able to provide more clarity on the relationship between planned capability upgrades and program timelines. Moreover, even though planned ANA capability was upgraded in 2006, prior to that time, the U.S.
invested nearly $3 billion to develop the ANA and reported approximately 20,000 troops trained as of May 2005. Absent a plan with performance measures, such as planned capability, linked to intermediate milestones and end dates, it is difficult to assess the results achieved by this financial investment. Key contributors to this report include Hynek Kalkus, Assistant Director; Lynn Cothern; Aniruddha Dasgupta; Mark Dowling; Cindy Gilbert; Elizabeth Guran; Al Huntington; and Elizabeth Repko. Combating Terrorism: U.S. Lacks Comprehensive Plan to Destroy Terrorist Threat and Close Safe Haven in Pakistan’s Federally Administered Tribal Areas. GAO-08-622. Washington, D.C.: April 17, 2008. Combating Terrorism: State Department’s Antiterrorism Program Needs Improved Guidance and More Systematic Assessments of Outcomes. GAO-08-336. Washington, D.C.: February 29, 2008. Operation Iraqi Freedom: DOD Assessment of Iraqi Security Forces’ Units as Independent Not Clear Because ISF Support Capabilities Are Not Fully Developed. GAO-08-143R. Washington, D.C.: November 30, 2007. Securing, Stabilizing, and Rebuilding Iraq: Iraqi Government Has Not Met Most Legislative, Security, and Economic Benchmarks. GAO-07-1195. Washington, D.C.: September 4, 2007. Stabilizing Iraq: DOD Cannot Ensure That U.S.-Funded Equipment Has Reached Iraqi Security Forces. GAO-07-711. Washington, D.C.: July 31, 2007. Securing, Stabilizing, and Reconstructing Afghanistan: Key Issues for Congressional Oversight. GAO-07-801SP. Washington, D.C.: May 24, 2007. Stabilizing Iraq: Factors Impeding the Development of Capable Iraqi Security Forces. GAO-07-612T. Washington, D.C.: March 13, 2007. Afghanistan Drug Control: Despite Improved Efforts, Deteriorating Security Threatens Success of U.S. Goals. GAO-07-78. Washington, D.C.: November 15, 2006. Afghanistan Reconstruction: Despite Some Progress, Deteriorating Security and Other Obstacles Continue to Threaten Achievement of U.S. Goals. GAO-05-742.
Washington, D.C.: July 28, 2005. Afghanistan Security: Efforts to Establish Army and Police Have Made Progress, but Future Plans Need to Be Better Defined. GAO-05-575. Washington, D.C.: June 30, 2005. Rebuilding Iraq: Preliminary Observations on Challenges in Transferring Security Responsibility to Iraqi Military and Police. GAO-05-431T. Washington, D.C.: March 14, 2005. Afghanistan Reconstruction: Deteriorating Security and Limited Resources Have Impeded Progress; Improvements in U.S. Strategy Needed. GAO-04-403. Washington, D.C.: June 2, 2004. Foreign Assistance: Lack of Strategic Focus and Obstacles to Agricultural Recovery Threaten Afghanistan’s Stability. GAO-03-607. Washington, D.C.: June 30, 2003. | Since 2002, the United States has worked to develop the Afghan National Security Forces (ANSF). The Department of Defense (Defense), through its Combined Security Transition Command-Afghanistan (CSTC-A), directs U.S. efforts to develop the Afghan National Army (ANA) and, in conjunction with the Department of State (State), the Afghan National Police (ANP). To follow up on recommendations from GAO's 2005 report on the ANSF, GAO analyzed the extent to which U.S. plans for the ANSF contain criteria we recommended. GAO also examined progress made and challenges faced in developing the ANA and ANP. To address these objectives, GAO reviewed Defense, State, and contractor documents and met with cognizant officials. GAO has prepared this report under the Comptroller General's authority to conduct evaluations on his own initiative. In 2005, GAO recommended that Defense and State develop detailed plans for completing and sustaining the ANSF. In 2007, Defense provided a document in response to this recommendation. This 5-page document lacks sufficient detail for effective interagency planning and oversight. 
For example, while the document includes some broad objectives and performance measures, it identifies few long-term milestones and no intermediate milestones for assessing progress, and it lacks a sustainability strategy. Although Defense and State are partners in police training, the document does not include State's input or describe State's role. Further, State has not completed a plan of its own. In January 2008, CSTC-A completed a field-level plan to develop the ANSF that includes force goals, objectives, and performance measures. While this is an improvement over prior field-level planning, it is not a substitute for a coordinated, detailed Defense and State plan with near- and long-term resource requirements. In 2008, Congress mandated that the Secretary of Defense, in coordination with the Secretary of State, provide a long-term strategy and budget for strengthening the ANSF, and a long-term detailed plan for sustaining the ANSF. These have not been provided. Without a detailed plan, it is difficult to assess progress and conduct oversight of the cost of developing the ANSF. This is particularly important given the limited capacity of the Afghan government to fund the estimated $2 billion per year ANSF sustainment costs for years into the future. The United States has invested over $10 billion to develop the ANA since 2002. However, only 2 of 105 army units are assessed as being fully capable of conducting their primary mission and efforts to develop the army continue to face challenges. First, while the army has grown to approximately 58,000 of an authorized force structure of 80,000, it has experienced difficulties finding qualified candidates for leadership positions and retaining personnel. Second, while trainers or mentors are present in every ANA combat unit, shortfalls exist in the number deployed to the field. 
Finally, ANA combat units report significant shortages in about 40 percent of equipment items Defense defines as critical, including vehicles, weapons, and radios. Some of these challenges are due in part to competing U.S. global priorities. Without resolving these challenges, the ability of the ANA to reach full capability may be delayed. Although the ANP has reportedly grown in number since 2005, after an investment of over $6 billion, no police unit is fully capable and several challenges impede U.S. efforts to develop the police. First, less than one-quarter of the police have mentors present to provide training in the field and verify that police are on duty. Second, police units continue to face shortages in equipment items that Defense considers critical, such as vehicles, radios, and body armor. In addition, Afghanistan's weak judicial system hinders effective policing and rule of law, and the ANP consistently experiences problems with pay, corruption, and attacks from insurgents. Defense began a new effort in November 2007 to address these challenges, but the continuing shortfall in police mentors may put this effort at risk. |
VA provides health care services to more than 5 million patients annually. This care includes mental health services to veterans in inpatient and outpatient settings in a variety of VA health care facilities including medical centers, CBOCs, and Vet Centers. Mental health services are provided for a range of conditions such as depression, PTSD, and substance abuse disorders. Resources for these and other health care services are allocated by VA headquarters through a general resource allocation system—the Veterans Equitable Resource Allocation (VERA) system—to its 21 health care networks. Although the VERA system is used to allocate funds, it does not designate funds for specific purposes or prescribe how those funds are to be used. In November 2004, the Secretary of VA approved the mental health strategic plan. This mental health strategic plan contained recommended initiatives for improving VA mental health services by addressing a range of issues, including, for example, improving awareness about mental illness and filling gaps in access to mental health services. Some of the service gaps identified were in treating veterans with serious mental illness, female veterans, and veterans returning from combat in Iraq and Afghanistan. Within VA, the Office of Mental Health Services (OMHS) is responsible for coordinating with the networks and medical centers on the overall implementation of the mental health strategic plan. This includes formulating strategies for allocating funds to medical centers and certain offices for plan initiatives. Such strategies include, for example, the use of RFPs to decide how the mental health strategic plan funds are to be allocated to medical centers. VA headquarters allocated $88 million of the $100 million that VA officials said would be used for mental health strategic plan initiatives in fiscal year 2005 by using several approaches. 
About $53 million was allocated directly to medical centers and certain offices and $35 million was allocated through its general resource allocation system to its health care networks, according to VA officials. The remaining $12 million of the $100 million was not allocated by any approach, headquarters officials said, because there was not enough time during the fiscal year to allocate the funds. Officials we interviewed at 7 medical centers in 4 networks reported using allocated funds to provide new mental health services and to provide more of existing services. However, some medical center officials reported that they did not use all allocated funds for plan initiatives by the end of the fiscal year, due in part to the length of time it took to hire new staff. VA headquarters allocated about $53 million directly to medical centers and certain offices based on proposals submitted for funding and other approaches targeted to specific initiatives related to the mental health strategic plan in fiscal year 2005. VA headquarters developed and solicited submissions from networks for specific initiatives to be carried out at their individual medical centers through requests for proposals (RFPs). VA made resources available through these RFPs and other targeted approaches to medical centers for plan initiatives to support a range of specific mental health services based, in part, on the priorities of VA leadership and legislation for programs related to PTSD, substance abuse, and other mental health areas, according to VA headquarters officials. Nearly $20 million of the $53 million allocated by using RFPs and other targeted approaches was for mental health services related to legislation, according to VA officials. Most of the approximately $53 million allocated—about $48 million—went to VA medical centers. 
PTSD services and OEF/OIF veterans’ mental health care received an allocation of about $18 million, with Compensated Work Therapy (CWT) receiving the second highest total—nearly $10 million. Other initiatives receiving funding included substance abuse services, mental health services in nursing homes, domiciliary expansion, and psychosocial rehabilitation for veterans with serious mental illness. VA headquarters issued five RFPs from October 2004 to January 2005 that described the specific types of services for which mental health strategic plan funding was available. Review panels headed by mental health experts within VA reviewed the proposals, ranked them, and provided their rankings to VA’s leadership. Once funding decisions were made, VA allocated funding directly to the medical centers for the mental health strategic plan initiatives. VA also used other funding approaches targeted to specific initiatives. For example, headquarters officials allocated funding to medical centers to expand mental health services at CBOCs that had fewer mental health visits than a standard VA identified for this purpose. VA also used other targeted funding approaches to determine which medical centers would receive some of the funds for PTSD, OIF and OEF veterans’, and substance abuse services. In addition, VA targeted funds to mental health initiatives in Polytrauma Centers—centers within certain VA medical centers that provide specialized treatment for veterans of OIF and OEF who have complex rehabilitation needs. VA headquarters officials said that allocations made for initiatives in fiscal year 2005 through RFPs and other approaches targeted to specific initiatives would be made for a total of 2 to 3 fiscal years. These officials said they anticipated that medical centers would hire permanent staff whose positions would need to be funded for more than 1 year. 
The expectation of VA leadership was that after funds allocated through these approaches were no longer available, medical centers would continue to support these programs using their general operating funds received through VA’s general resource allocation system. VA allocated $35 million for mental health strategic plan initiatives in fiscal year 2005 through its general resource allocation system to its health care networks, according to VA headquarters officials. The decision to allocate these resources to VA’s networks for mental health strategic plan initiatives was retrospective and VA did not notify networks and medical centers of this decision. Although VA headquarters made fiscal year 2005 general resource allocations to the networks in December 2004, the decision that $35 million of the funds allocated at that time were for mental health strategic plan initiatives was not finalized until April 2005, several months after the general allocation had been made. VA headquarters officials said that they made the decision to allocate $35 million from the general resource allocation system because these resources would be more rapidly allocated than if they had been allocated through RFPs. However, other VA headquarters officials told us that the decision was also made, in part, because VA did not have sufficient unallocated funds remaining after the December 2004 general allocation to fund $100 million for mental health strategic plan initiatives through RFPs and other targeted approaches. VA headquarters officials, as well as network and medical center officials, indicated that there was no guidance to the networks and medical centers instructing them to use specific amounts from their general fiscal year allocation for mental health strategic plan initiatives. Network and medical center officials we spoke with were unaware that any specific portion of their general allocation was to be used for mental health strategic plan initiatives.
Several VA medical center officials noted, however, that some of the funds in their general allocation were used to support their mental health programs generally, as part of their routine operations. However, because network and medical center officials we interviewed did not know that funds had been allocated for mental health strategic plan initiatives through VA’s general resource allocation system, and because VA headquarters did not notify networks and medical centers throughout VA of this retroactive allocation, it is likely that some of these funds were not used for plan initiatives. VA did not allocate the approximately $12 million remaining of the $100 million planned for mental health strategic plan initiatives in fiscal year 2005 because, according to VA headquarters officials, there was not enough time during the fiscal year to allocate the funds through the RFP process or other approaches targeted to specific initiatives. Officials said that when resources were allocated later in the fiscal year through an RFP rather than at the beginning, the amount allocated was only a portion of the annualized cost. The full annualized cost could be supported in the next fiscal year. For example, if a project with an annual cost of $4 million was allocated midway through the fiscal year, only half the annual cost—$2 million—was allocated at that time. The expectation was that the full $4 million would be available for the project over 12 months in the next fiscal year. The $12 million that VA did not allocate for fiscal year 2005 was intended for certain mental health strategic plan initiatives based on an allocation plan developed by VA for the $65 million it planned to allocate through RFPs and other approaches. VA headquarters officials said that funds not allocated for mental health strategic plan initiatives were allocated for other health care purposes.
Officials we interviewed from seven medical centers in four networks reported using funds allocated to them for mental health strategic plan initiatives through RFPs and other targeted approaches, but they said that some of these funds were not used for plan initiatives in fiscal year 2005. Officials said they used funds allocated to provide new mental health services and to provide more of existing services included in plan initiatives. For example, two medical centers used funds to increase the number of mental health providers available at CBOCs. One of those medical centers also implemented a new 6-week PTSD day treatment program in which veterans live in the community but come to the medical center daily for counseling, group therapy, and other services. Officials at some medical centers reported that they were not able to use all of their fiscal year 2005 funding for plan initiatives by the end of the year as planned and cited several reasons that contributed to this situation. The length of time it takes to recruit new staff in general and the special problems of hiring specialized staff, such as psychiatrists, were cited. In some cases the need to locate or renovate space for programs contributed to delays in using mental health strategic plan funds, according to medical center officials. Medical centers varied in how they treated fiscal year 2005 funds that were allocated by VA for mental health strategic plan initiatives but not used for those initiatives. Some reported that they carried over the funds for use in the next fiscal year. Officials at some medical centers reported that they used these funds for other health care purposes. For example, officials at one medical center said they used funds that they did not spend on mental health strategic plan initiatives to support other mental health programs. 
VA headquarters officials advised participants from networks and medical centers in a weekly conference call in August 2005 that if they were unable to hire staff for initiatives in fiscal year 2005, they should use the funds allocated only for mental health services. As of September 20, 2006, VA headquarters had allocated $158 million of the $200 million to be used for VA mental health strategic plan initiatives in fiscal year 2006 by using several approaches. About $92 million of these funds was allocated directly to medical centers and certain offices to support new mental health strategic plan initiatives for fiscal year 2006. VA also allocated about $66 million to support the recurring costs of the continuing mental health initiatives that were funded in fiscal year 2005. The remaining $42 million had not been allocated as of September 20. Officials at some medical centers expected to spend all of the allocations they received during fiscal year 2006. However, officials at some medical centers were uncertain that they would spend all their allocations for plan initiatives during the fiscal year. VA headquarters had allocated about $158 million directly to medical centers and certain offices by September 20, 2006, through RFPs and other approaches targeted to specific initiatives related to the mental health strategic plan in fiscal year 2006. About $92 million was for new mental health strategic plan activities, and about $66 million was to support the recurring costs of continuing mental health strategic plan initiatives that were first funded in fiscal year 2005. As in fiscal year 2005, the new resources went to support a range of mental health services in line with priorities of VA’s leadership and legislation, according to VA officials. Funding for services for PTSD, OIF and OEF veterans, substance abuse, and CBOC mental health services accounted for nearly three-fifths of the funds allocated for new initiatives. 
As of September 18, 2006, VA had not allocated resources for mental health strategic plan initiatives through its general resource allocation system, and VA headquarters officials said VA was not planning to do so. As of September 20, 2006, VA had not allocated about $42 million of the $200 million planned for mental health strategic plan initiatives in fiscal year 2006 by any approach. VA officials said that a portion of these unallocated funds was related to the timing of allocations that were made for plan initiatives through RFPs and other approaches targeted to medical centers. Specifically, some of the allocations through RFPs were made well into the fiscal year. VA allocated only the amount of funds through these approaches for fiscal year 2006 that would fund the projects through the end of the fiscal year, and not the full 12-month cost, which VA expects to fund in fiscal year 2007. Because some of these allocations were made in the later part of fiscal year 2006, these allocations were smaller than they would be on a 12-month basis and accounted for part of the $42 million not allocated. VA officials said they anticipated that these funds would be available in fiscal year 2007. Officials from seven medical centers we interviewed in May and June of 2006 reported using funds allocated to them through RFPs and other approaches to support new 2006 mental health initiatives and to continue to support the initiatives first funded in fiscal year 2005. For example, one medical center used funding for a new mental health intensive case management program. Officials at some medical centers reported that they did not anticipate problems using all of the funds they had received in fiscal year 2006. However, others were less certain they would be able to use all of the funds. Officials at several medical centers were not sure they would be able to hire all of the new staff related to mental health strategic plan initiatives by the end of the fiscal year.
In May 2006, officials at two medical centers that we interviewed said that they did not know whether they would receive additional funds through RFPs to spend in fiscal year 2006, and as a result they were uncertain whether they would be able to use all of their fiscal year 2006 funds for plan initiatives by the end of the fiscal year. Our preliminary findings show that VA allocated additional resources for mental health strategic plan initiatives in fiscal years 2005 and 2006 to help address identified gaps in VA’s mental health services for veterans. VA intended to allocate $100 million for plan initiatives in fiscal year 2005. The allocations that were made resulted in some new and expanded mental health services to address gaps, according to officials at selected medical centers. However, approximately $12 million of the $100 million was not allocated by any method and $35 million was allocated through VA’s general resource allocation system on a retrospective basis and without notifying networks and medical centers that resources for plan initiatives had been allocated in the general allocation that networks received several months earlier. Finally, some portion of the approximately $53 million that was allocated directly to medical centers was not used for plan initiatives in part because the timing of the allocation of the funds did not leave time to hire needed staff by the end of the fiscal year. As a result, it is likely that a substantial portion of the $100 million intended for mental health strategic plan funds in fiscal year 2005 was not used for plan initiatives. A larger amount of the planned mental health strategic plan funds was allocated in fiscal year 2006, although as of September 20, 2006, about a fifth of the $200 million planned for these initiatives was not allocated. 
However, it is unclear whether medical centers will be able to spend all of the fiscal year 2006 mental health strategic plan funds for plan initiatives by the end of the year, in part because of how late in the year the funds were allocated. For further information about this statement, please contact Laurie E. Ekstrand at (202) 512-7101 or ekstrandl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. James Musselwhite, Assistant Director, and Robin Burke made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Department of Veterans Affairs (VA) provides mental health services to veterans with conditions such as post-traumatic stress disorder (PTSD) and substance abuse disorders. To address gaps in services needed by veterans, VA approved a mental health strategic plan in 2004. VA planned to increase its fiscal year 2005 allocations for plan initiatives by $100 million above fiscal year 2004 levels. VA also planned to increase its fiscal year 2006 allocations for plan initiatives by $200 million above fiscal year 2004 levels--composed of $100 million for continuation of fiscal year 2005 initiatives and an additional $100 million identified in the President's fiscal year 2006 budget request. GAO was asked to provide preliminary information on VA's allocation and use of funding for mental health strategic plan initiatives in fiscal years 2005 and 2006. A report on this work will be issued later in the fall of 2006. 
GAO reviewed VA reports and documents on mental health strategic plan initiatives and conducted interviews with VA officials from headquarters, 4 of 21 health care networks, and 7 medical centers. VA delegates decision making to its health care networks for most budget and management responsibilities regarding medical center operations, and medical centers receive most of their resources from the networks. In fiscal year 2005, VA headquarters allocated $88 million of the $100 million VA officials intended for mental health strategic plan initiatives. VA allocated about $53 million directly to medical centers and certain offices based on proposals submitted for funding and other approaches targeted to specific initiatives. VA solicited submissions from networks for specific initiatives to be carried out at their individual medical centers through requests for proposals (RFPs). In addition, VA headquarters officials said that VA allocated $35 million for this purpose through VA's general resource allocation system to its 21 health care networks on a retrospective basis. VA made this decision several months after resources had been provided to the networks through the general allocation system. Moreover, VA did not notify network and medical center officials that these funds were to be used for plan initiatives. Health care network and medical center officials interviewed told GAO that they were not aware these allocations had been made. As a result, it is likely that some of these funds were not used for plan initiatives. Moreover, VA did not allocate the approximately $12 million remaining of the $100 million for fiscal year 2005 because, according to VA officials, there was not enough time during the fiscal year to do so. Medical center officials said they used the funds allocated directly to their medical centers for plan initiatives that included new mental health services and more of the services they already provided. 
For example, two medical centers used funds allocated to them through RFPs or other targeted approaches to increase the number of mental health providers at community based outpatient clinics. One of those medical centers also started a new 6-week PTSD day treatment program. However, some medical center officials reported that they did not use all funds allocated for plan initiatives by the end of fiscal year 2005, due in part to the length of time it took to hire new staff. In fiscal year 2006, as of September 20, 2006, VA headquarters had allocated $158 million of the $200 million planned for mental health strategic plan initiatives. VA allocated about $92 million of these funds directly to medical centers and certain offices to support new initiatives, using RFPs and other targeted funding approaches. VA also allocated about $66 million to support recurring costs of the continuing initiatives from the prior fiscal year. As of September 20, 2006, about $42 million of the $200 million for fiscal year 2006 had not been allocated. Officials from seven medical centers we interviewed reported that they had used funds for plan initiatives, such as the creation of a new intensive mental health case management program. Officials at some medical centers reported that they did not anticipate problems using all of the funds allocated to them through RFPs and other targeted approaches in fiscal year 2006. However, officials at other medical centers were less certain that they would use all of these funds for plan initiatives by the end of fiscal year 2006. GAO discussed the information in this statement with VA officials who agreed that the data are accurate, and provided updated data which are incorporated as appropriate. |
To describe the process and significant events leading up to the Commission’s selection of the PCAOB’s first members, to determine what the SEC Chairman knew about the involvement of Judge Webster in U.S. Technologies, and to determine whether the Chairman had withheld information from the other Commissioners prior to the Commission’s vote, we reviewed thousands of internal documents. These documents included plans, memorandums, and correspondence between and among the Chairman, the Commissioners, the SEC Chief Accountant, other SEC staff involved in the PCAOB selection process, and outside parties. We also used this information to corroborate and verify testimonial evidence collected through extensive interviews of the SEC Chairman; each Commissioner; the Chief Accountant; the General Counsel; the Chairman’s Chief of Staff; the PCAOB appointees; and John H. Biggs, who was among those considered for the PCAOB chairmanship. We also obtained information from staff in SEC’s Office of the Chief Accountant, the Office of the General Counsel, the Division of Enforcement, the Division of Corporation Finance, and the Office of the Chairman and others within and outside SEC. Finally, we benefited greatly from technical support and assistance provided by staff in SEC’s Office of the Inspector General. We reviewed and used information collected and assembled by staff of the Office of the Inspector General throughout our review. A determination and assessment of the details of Judge Webster’s involvement in U.S. Technologies was beyond the scope of this review. To determine what vetting of appointees took place, we reviewed the SEC Office of the General Counsel’s proposed vetting process for the PCAOB appointees and other relevant documentation. We also interviewed SEC’s General Counsel and the PCAOB appointees to obtain information about the types of information they provided prior to their appointment to the PCAOB.
Likewise, to determine what aspects of SEC’s selection and vetting process contributed to the breakdown of the process, we attempted to identify other applicable appointment models and to draw out common elements of those models against which to compare the process followed by SEC in selecting and vetting PCAOB appointees. Finally, we obtained views from those we interviewed about recommended process improvements. We conducted our work in New York, New York, and Washington, D.C., in November and December 2002 in accordance with generally accepted government auditing standards. The act specifies that the PCAOB is to consist of five full-time members, with one being designated as the chairman. According to the act, each PCAOB member is to have a demonstrated commitment to the interests of investors and the public, an understanding of issuers’ financial disclosure requirements, and an understanding of the obligations of accountants with respect to the preparation of audit reports. The act also specifies that two, but no more than two, members be certified public accountants (CPAs). PCAOB members generally are expected to serve 5-year terms. However, to establish staggered terms of office, the terms of office of the initial PCAOB expire in annual increments, ranging from 1 to 5 years, with the chairman serving a 5-year term. Although its activities are subject to SEC oversight and approval, the PCAOB is an independent board with sweeping powers and authority.
It has the authority to register public accounting firms that prepare audit reports for companies that issue securities to the public (issuers); establish rules for auditing, quality control, independence, and other standards relating to the preparation and issuance of audit reports for issuers; conduct inspections of registered public accounting firms and associated persons; conduct investigations and disciplinary proceedings and, where justified, impose appropriate sanctions upon registered public accounting firms and associated persons; perform other duties or functions determined necessary or appropriate to promote high professional standards among public accounting firms and associated persons; enforce compliance with the act, the rules of the PCAOB, professional standards, and the securities laws relating to the preparation and issuance of audit reports by registered public accounting firms and associated persons; and set the budget and manage the operations of the PCAOB and its staff. The newly created PCAOB is to be structured as a nonprofit corporation that is funded by fees assessed on public companies. The act specifies that PCAOB members, employees, and agents are not considered employees of the federal government. The act requires SEC to appoint PCAOB members and verify that the organization meets its statutory responsibilities. Specifically, the act requires that SEC, in consultation with the Chairman of the Board of Governors of the Federal Reserve System and the Secretary of the Treasury, appoint the initial five-member board within 90 days of the act’s passage—that is, by October 28, 2002. Within 270 days of enactment, SEC is to determine that the PCAOB has taken actions necessary to carry out its mission. These actions include hiring staff, proposing rules, and adopting initial and transitional auditing and other professional standards.
Within 180 days of SEC’s determination that the PCAOB is meeting its statutory responsibilities, any public accounting firm that is not registered with the PCAOB may not participate in the preparation or issuance of any audit report for any public company that issues securities to the public. SEC is an independent agency comprising five presidentially appointed commissioners, four divisions, and 18 offices. In total, SEC has approximately 3,100 staff. SEC is headquartered in Washington, D.C., and it has 11 regional and district offices throughout the country. To ensure that the Commission remains nonpartisan, no more than three commissioners may belong to the same political party. The President also designates one of the commissioners as chairman, the SEC’s top executive. The commissioners meet to discuss and resolve a variety of issues that staff bring to their attention. At these meetings, the commissioners interpret federal securities laws, amend existing rules, propose new rules to address changing market conditions, and/or take action to enforce rules and laws. These meetings are open to the public and the news media, unless the discussion pertains to confidential subjects such as whether to begin an enforcement investigation. Faced with appointing five members to the newly created PCAOB in 90 days, SEC lacked a formalized and tested process that documented the roles to be played by the Commissioners and staff. The SEC Chairman initially asked the Chief Accountant to take the lead in identifying potential PCAOB members; however, the other Commissioners never fully endorsed this approach. A lack of consensus among the Commissioners and a lack of staff direction and communication resulted in SEC’s failure to find a slate of candidates that would elicit a unanimous vote from the Commission. Moreover, these events ultimately resulted in SEC appointing members to the PCAOB that had not been fully vetted.
In requiring SEC to appoint members to the PCAOB within 90 days, the act posed a unique challenge for SEC. SEC had not in recent history conducted a similar selection process; therefore, it lacked formalized and tested procedures that were familiar to the Commissioners and SEC staff. The actual process used to appoint PCAOB members was not documented and evolved as the statutory deadline for appointing members approached. Upon passage of the act, the Chairman designated the SEC’s Chief Accountant to lead the search for and identification of PCAOB nominees, with assistance from the General Counsel, who was assigned to vet the candidates. The Chief Accountant began identifying potential candidates for the PCAOB from a wide range of sources, including current and prior Commissioners, Members of Congress, government officials, regulatory organizations, trade associations, and industry leaders. SEC also solicited input from the public through an August 1, 2002, release asking for nominations and applicants willing to serve on the PCAOB. As required by the act, early in the process, the SEC Chairman began to consult with the Chairman of the Board of Governors of the Federal Reserve System and the Secretary of the Treasury to obtain their input and suggestions for potential PCAOB candidates. Early in the selection process, the SEC Chairman’s goal was to find an outstanding candidate as chairman, an individual of great stature who could reassure investors and receive unanimous support from the Commission. The SEC Chairman initially planned that he, along with a Democratic Commissioner and the Chief Accountant, would approach candidates for the chairmanship. The Chairman said that he believed this would help make the process bipartisan. The SEC Chairman wanted the Chief Accountant to participate because he was the person within SEC who would have the most contact with the PCAOB chairman; therefore, he needed to be comfortable with the selection. 
However, at least one Commissioner told us that the reason for this approach was neither communicated to him nor fully understood by him. Given that the nominees were being considered for service on a board that was designed to help restore investor confidence in financial reporting systems and to clean up perceived problems in the accounting profession, the SEC Chairman said that the PCAOB, and thus each of its members, must be beyond reproach. To achieve that end, the Chairman asked the General Counsel to vet nominees and, at a minimum, identify any significant potential problems or conflicts, real or perceived, involving accounting and other related issues. The General Counsel said that he saw his role as working with the Office of the Chief Accountant to develop an application to collect financial and background information from appointees, to select a contractor to conduct background checks on the appointees, and to identify other steps to vet the slate of candidates selected by the Commission. The staff initially planned to have the slate of potential PCAOB candidates determined by the end of September, which the General Counsel thought would have provided time to do at least some vetting of the appointees before the October 28 deadline. It is unclear whether the other Commissioners were informed of or fully endorsed this plan; some of the Commissioners wanted more involvement in the process and thought it best for each Commissioner independently to do due diligence on potential candidates. This selection strategy broke down when the Commissioners, lacking a documented and formalized process, were unable to agree upon and follow a strategy to identify, vet, and select members to the PCAOB and attract a consensus candidate to serve as chairman. In August 2002, according to those involved in the process, Paul A. Volcker, the former Chairman of the Board of Governors of the Federal Reserve System, emerged as the consensus choice for PCAOB chairman. 
The SEC Chairman, a Democratic Commissioner, and the Chief Accountant tried throughout August to persuade Mr. Volcker to consider serving as PCAOB chairman. The SEC Chairman also asked the Secretary of the Treasury, the Chairman of the Board of Governors of the Federal Reserve System, and others to assist him in persuading Mr. Volcker. In early September, Mr. Volcker declined to be considered for appointment, in part because the full-time nature of the position required him to give up outside interests that were important to him. In September, the SEC Chairman, the Democratic Commissioner, and the Chief Accountant shifted their focus to Mr. Biggs, the retiring Chief Executive Officer of Teachers Insurance and Annuity Association - College Retirement Equities Fund (TIAA-CREF). On September 11, the Chairman, the Democratic Commissioner, and the Chief Accountant met with Mr. Biggs to discuss his interest in serving on the PCAOB. According to those involved, the purpose of the meeting was to persuade Mr. Biggs to agree to be considered for the chairmanship of the PCAOB. At this meeting, the Chairman and the Democratic Commissioner in attendance told Mr. Biggs that he would have their support. However, the SEC Chairman also stated that his final decision would rest in what he hoped would be a unanimous decision by the Commission. Mr. Biggs said that he told the SEC Chairman that he would only serve on the PCAOB if he were appointed its chairman. The following week, Mr. Biggs called the Chairman and the Chief Accountant to say that he was willing to be considered. On September 24, Mr. Biggs met with a third Commissioner who also gave his support, thereby giving Mr. Biggs enough votes for a majority. Mr. Biggs subsequently met with the remaining two Commissioners and other SEC staff on September 27. For the Chairman, support of Mr. Biggs was contingent upon another specific individual being appointed to the PCAOB. 
Therefore, when one of the Commissioners informed the Chairman (around Sept. 27) that another Commissioner might not be willing to support that individual, the Chairman became less willing to support Mr. Biggs. The SEC Chairman continued to discuss throughout September other candidates who could potentially serve as chairman or members of the PCAOB. Although potential appointees to the PCAOB had been the subject of ongoing media speculation, on October 1, a newspaper article indicated that Mr. Biggs had “agreed to be the first head of a new regulatory oversight board for the accounting profession.” According to those we interviewed, this article upset some of the Commissioners because it said that the job had been offered to Mr. Biggs. Some of the Commissioners said that the article made them feel that their vote was irrelevant to the selection of the chairman. The SEC Chairman telephoned Mr. Biggs on October 2 and informed him that the October 1 article had “complicated things” and threatened the Chairman’s desire to achieve a unanimous vote. Although the article reported that Mr. Biggs declined to be interviewed, the article, together with a subsequent article that appeared on October 4, led some of the Commissioners to believe that Mr. Biggs was the source of the information included in the articles, directly or indirectly. As a result, some of the Commissioners raised serious questions about Mr. Biggs’s independence, judgment, and ability to effectively work on the PCAOB. At this point, the Commission became divided, with at least one Commissioner willing to support only Mr. Biggs as the chairman and others who strongly opposed Mr. Biggs’s nomination as chairman. SEC’s Chairman and Chief Accountant said that they originally planned for the Commissioners to meet with only about five to seven PCAOB candidates, who would be identified by the Chief Accountant. Again, this approach was not communicated to or endorsed by all of the Commissioners. 
Therefore, in late September, with time running out and little progress made in selecting candidates, the selection process changed. At the urging of one of the Commissioners, the Chief Accountant and each of the Commissioners began to interview candidates. In total, each Commissioner interviewed about 25 candidates for the PCAOB from late September into October. Although the SEC Chairman and the Chief Accountant were considering a number of candidates, Judge Webster, former Director of the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency, emerged as a leading candidate for PCAOB chairman. Although his name had surfaced in early August along with others, he was not seriously pursued at that time. According to Judge Webster, the SEC Chairman first contacted him on September 27 about considering a position on the PCAOB and later sent him some background material. On October 15, Judge Webster met with the SEC Chairman, the Chief Accountant, and the SEC Chairman’s Chief of Staff, who urged Judge Webster to consider serving as PCAOB chairman. They discussed a number of items at this meeting. At some point during the meeting, the Chairman said that there was one reason for Judge Webster not to consider the position, which was that Judge Webster’s nomination would be criticized by some and that he could be attacked in the media. According to those in attendance, Judge Webster said that he had been confirmed by the Senate for other federal posts on five occasions and nothing in his past would pose a problem. He added that people might make something out of the fact that he was the former chairman of the audit committee of the board of directors of U.S. Technologies, a company that he described as on the brink of failure. According to Judge Webster, he also asked the SEC officials at that meeting to check SEC’s records to see if they indicated any problems relating to U.S. Technologies.
As discussed in detail in the next section, an initial review of this matter conducted by staff in SEC’s Office of the Chief Accountant did not reveal, in the Chief Accountant’s opinion, any disqualifying problems involving Judge Webster’s role in the company. Based on the information he obtained, the Chief Accountant told the Chief of Staff that there was no problem with Judge Webster’s involvement in U.S. Technologies. The Chief of Staff communicated that message to the SEC Chairman. Neither the information provided by Judge Webster nor the information collected by the Chief Accountant was provided to SEC’s General Counsel for vetting purposes. On October 21, Judge Webster met with the SEC Chairman and the Chief Accountant to discuss the position further. According to Judge Webster, the Chief Accountant and the SEC Chairman independently told Judge Webster on October 22 or 23 that his involvement with U.S. Technologies would not be a problem. Judge Webster also spoke, in person or on the telephone, with the other Commissioners and the General Counsel on or around October 22, but U.S. Technologies was not mentioned or discussed. Late in the afternoon of October 23, Judge Webster agreed to have his name considered for PCAOB chairman. The SEC Chairman and the Chief Accountant finalized the choices for the other members of the PCAOB and developed a five-member slate on October 24. On that day, in part due to concerns about a leak to the press, the draft slate was not shared with the full Commission. However, the Secretary of the Treasury and the Chairman of the Board of Governors of the Federal Reserve System were informed of the draft slate on October 24, and at the request of the SEC Chairman, they signed a joint letter endorsing Judge Webster and the other members on the slate. There was additional research into Judge Webster’s involvement with U.S.
Technologies after Judge Webster agreed to have his name submitted for consideration on October 23. On October 24, the Chief Accountant received a draft newspaper article, which mentioned that Judge Webster had served on the board of directors of several companies, including U.S. Technologies. This prompted the Chief Accountant to ask one of his staff to do some additional follow-up on any open or closed enforcement activity concerning U.S. Technologies. This inquiry also covered certain corporate disclosures filed with SEC by U.S. Technologies, including documents indicating that the company had dismissed its external auditor a month after material internal control weaknesses were reported. The Chief Accountant received this information on the morning of October 25, a few hours before the scheduled open meeting of the Commission. Again, as discussed in detail in the next section, in the opinion of the Chief Accountant, this review revealed nothing that would have disqualified Judge Webster as a nominee. Therefore, the Chief Accountant did not pass on any information about U.S. Technologies or Judge Webster’s role to the SEC Chairman or the other Commissioners to consider prior to their vote to appoint members to the PCAOB. He also did not share this information with the General Counsel. The SEC Chairman said that he and the Commissioners had planned to vote seriatim—whereby the slate of nominees would be passed among the Commissioners for signature—on Thursday, October 24, rather than holding an open Commission meeting. However, on October 23, one of the Commissioners requested an open meeting. On the morning of the October 25 vote, the Office of the Chief Accountant provided the Commissioners with the slate of names for the PCAOB and formally notified them that vetting would occur post-appointment. At the open meeting, one Commissioner voted against all of the board nominees, stating that the selection process was inept and seriously flawed.
Another Commissioner voted against Judge Webster, stating that he was not as qualified for the post as Mr. Biggs, but voted in favor of the remaining slate. The SEC Chairman and the remaining two Commissioners voted in favor of the slate of five. Judge Webster therefore was approved by a vote of three to two, and the remaining PCAOB nominees were approved by a vote of four to one. Staff in the Office of the Chief Accountant continued to research matters associated with U.S. Technologies from the morning of the vote into the week of October 28. On October 31, allegations emerged that the SEC Chairman, before the October 25 vote, withheld from his fellow Commissioners material information about Judge Webster’s role at U.S. Technologies, which was relevant to the appointment of Judge Webster as chairman of the PCAOB. Later that same day, the SEC Chairman and another Commissioner separately called the SEC Inspector General to request an investigation of these allegations. The SEC Chairman also asked the SEC Office of the General Counsel to conduct an investigation into Judge Webster’s involvement with U.S. Technologies. Amid the subsequent controversy, the SEC Chairman announced his intention to resign on November 5, the Chief Accountant announced his resignation on November 8, and Judge Webster resigned from the PCAOB on November 12, effective upon the appointment of a new chairman. To date, the PCAOB has had two planning meetings, which have included Judge Webster. The PCAOB is expected to hold its first official meeting on January 6, 2003, at which time members’ terms officially begin. At this time, no acting chairman or replacement chairman has been appointed to the PCAOB. See appendix I for a more detailed chronology of major events. As discussed above, the Office of the Chief Accountant performed two reviews into Judge Webster’s involvement in U.S. Technologies prior to his appointment as PCAOB chairman.
According to those in attendance, in an October 15 meeting, which included the SEC Chairman, the Chief Accountant, and the Chairman’s Chief of Staff, Judge Webster mentioned that he had formerly served as chairman of the audit committee of the board of directors of U.S. Technologies, a company on the brink of failure. He said that he asked SEC officials at that meeting to check SEC’s records to see if they indicated any problems relating to U.S. Technologies. According to the SEC Chairman, he told Judge Webster that they would contact him if any problems were found. Following this meeting, the SEC Chairman asked the Chief Accountant to look into U.S. Technologies. No one who attended the meeting contacted SEC’s General Counsel, who was responsible for vetting PCAOB candidates. Instead, the Chief Accountant asked his secretary to follow up on whether there were any open or closed SEC investigations of the company. Contact with the Division of Enforcement revealed that SEC was looking into allegations of misconduct by an officer of U.S. Technologies, not Judge Webster, involving a Schedule 13D filed in 1999. Staff in the Office of the Chief Accountant received information from Enforcement staff that led them to believe that Enforcement staff expected to close the matter. Because the matter involved an officer of U.S. Technologies, not the company directly or the activities of its board of directors, the Chief Accountant concluded that this did not affect Judge Webster’s nomination to serve as chairman of the PCAOB. According to the Chief Accountant, he passed along information from Enforcement staff to the SEC Chairman’s Chief of Staff that indicated there was no problem as a result of Judge Webster’s involvement with U.S. Technologies, and the Chief of Staff reported the same to the SEC Chairman. According to Judge Webster, the SEC Chairman and Chief Accountant independently informed him on October 22 or 23 that his involvement in U.S.
Technologies would not pose a problem. The SEC Chairman said that he recalled contacting Judge Webster, but the Chief Accountant said that he did not recall doing so. There was a second inquiry into U.S. Technologies and Judge Webster by the Office of the Chief Accountant. This inquiry was prompted late on October 24 when the Chief Accountant reviewed a draft newspaper article, which mentioned that Judge Webster had formerly served on the board of directors of U.S. Technologies and had served as the chairman of its audit committee until July 2002. The Chief Accountant asked one of his staff to do some additional follow-up but indicated that he thought it was “clean” on the basis of the initial review. This second review, as described in greater detail in figure 1, included examining certain corporate disclosures that U.S. Technologies filed with SEC, such as the most recent annual and quarterly filings. Early on the morning of October 25, staff became aware that the company had disclosed in a 2001 filing that it dismissed its external auditor in August 2001, a month after the auditor informed the company of material internal control weaknesses. Upon learning about the change in auditor, staff in the Office of the Chief Accountant did not contact Judge Webster to obtain additional information on this issue, nor did they contact other audit committee members, the company, the current or former external auditor, or the SEC General Counsel. Similar to his initial determination of October 15, the Chief Accountant evaluated the additional information that had been collected, including information on U.S. Technologies’s change in external auditor, and determined that, in his view and his staff’s view, nothing had come to light that affected the suitability of Judge Webster to serve as PCAOB chairman.
The Chief Accountant told us that his decision was based on his review of financial disclosure documents filed with SEC; his experience as an auditor; the stature and reputation of Judge Webster, who had been confirmed by the Senate five times; and his knowledge that additional vetting would occur post-appointment. The Chief Accountant also said that the documents U.S. Technologies filed with SEC, which were filed late and reported internal control weaknesses, described problems that were not unusual in small, rapid-growth companies. He said that such companies often outgrow their existing financial and accounting systems and the capacity of their chief financial officers. Moreover, he was persuaded by the fact that U.S. Technologies’s auditor had ultimately given the company a clean opinion. Having decided that U.S. Technologies posed no problems with regard to Judge Webster’s nomination, the Chief Accountant did not believe that he needed to share this information with the SEC Chairman or the other Commissioners. The Chief Accountant said that he had made a similar judgment about Mr. Biggs, who had been on the audit committee of the board of directors of McDonnell Douglas when it entered into a consent decree with SEC regarding issues involving accounting irregularities several years ago. According to the SEC Chairman, he knew that Judge Webster was the former chairman of the audit committee of the board of directors of U.S. Technologies before the October 25 vote. He said that he learned after the vote that law enforcement authorities were investigating the Chief Executive Officer of U.S. Technologies. Specifically, Judge Webster told us that he telephoned the SEC Chairman on October 28 to inform him that he had learned during the weekend of October 26 and 27 that law enforcement was investigating U.S. Technologies’s Chief Executive Officer.
Further, the SEC Chairman and the other Commissioners told us that they first learned of reported “allegations of fraud” against U.S. Technologies, and of the company’s dismissal of its external auditor following an audit that uncovered material internal control weaknesses, from a reporter’s inquiry on October 30 or from the October 31 newspaper article. This disclosure prompted the SEC Chairman to ask the General Counsel to investigate Judge Webster’s role in these matters. The Office of the General Counsel subsequently suspended the investigation due to Judge Webster’s resignation. Vetting candidates is a vital component of the appointment process. The General Counsel was asked to vet the appointees, and according to the SEC Chairman, he expected that some vetting and background checks would be performed by the General Counsel on candidates throughout the process. Although the Chairman said that he knew that some vetting would occur post-appointment, he was surprised at how little was done before the vote. We found that after the selection process broke down in early October and Commissioners began to interview a larger number of candidates than staff originally planned, the General Counsel met with many but not all of the potential candidates who were interviewed. Specifically, the General Counsel told us that prior to October 25, he had met with 17 of the roughly 25 candidates who were interviewed. However, the Office of the Chief Accountant did not schedule meetings for him with three of the five individuals who were ultimately appointed to the PCAOB. The General Counsel said that he understood that his role in interviews with candidates beginning in early October was to address questions regarding service on the PCAOB, such as pay, location, ethics restrictions, and other matters important to attracting quality candidates. He also informed candidates that they would be required to submit questionnaires and be subject to a background check.
The General Counsel said that he expected that the final slate would be subject to additional interviews and vetting. As a result, the candidates with whom he met were not systematically queried about current or previous membership on boards of companies, nor were they subject to the other planned elements of the vetting process. For example, we found that SEC did not systematically use its available internal technological capabilities and resources to the fullest extent possible to begin to collect fundamental information on the applicants being interviewed, as it had initially planned to do. Moreover, SEC staff did not consistently search internal databases such as the Name Relationship Search Index (NRSI) and the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system, or periodical databases such as LexisNexis and Westlaw, for any information on potential candidates. Instead, if any candidate brought up an issue that might potentially affect his or her fitness to serve, the General Counsel would look into the matter. This occurred in at least two instances during the interview process. The General Counsel met with Judge Webster prior to his appointment, but there was no discussion of U.S. Technologies. The General Counsel said that he first learned of potential concerns about Judge Webster and U.S. Technologies from press inquiries in the days leading up to the October 31 newspaper article. Early in the selection process, there also was no clearly defined and agreed-upon method for vetting of candidates, and SEC staff considered various approaches to vetting the slate of five candidates. Initially, the Office of the General Counsel explored using the FBI to conduct background investigations into PCAOB appointees.
However, because the PCAOB was not a federal government entity and the FBI was unlikely to be able to complete required investigations within SEC’s tight time frames, SEC staff decided that it would be more appropriate to hire an outside contractor to perform this role. The General Counsel agreed to develop an appointee questionnaire that then would be supplied to the outside contractor that would be performing the background checks. It was the General Counsel’s expectation throughout the process that background checks would be performed only for the five individuals actually nominated for the PCAOB. Early in the selection process, the SEC Chairman and staff believed that the selection process would be completed by the end of September and that at least some vetting of the appointees would be completed before the October 28 statutory deadline for the appointment of the PCAOB. However, by mid-October the slate had still not been agreed upon and the General Counsel had just hired a contractor to conduct background checks on the appointees. Therefore, it became clear that vetting of candidates could not be completed prior to appointment, and the General Counsel concluded that it would be necessary to vet the PCAOB members post-appointment. Ultimately, the General Counsel did not know the final slate of names selected from among those who were interviewed until October 24, the night before the vote. As a result, insufficient time remained to properly vet the PCAOB members prior to their appointment, and the General Counsel and his staff were able to perform only a very limited inquiry into enforcement activities before the Commission’s vote. Although some indicated surprise, all of the Commissioners were informed before the vote that background checks had not been performed on the candidates and that the Office of the General Counsel planned to use a questionnaire and outside contractor to vet the appointees.
At the October 25 Commission meeting, the Commissioners selected the first chairman and members of the PCAOB and authorized background checks. On November 1, 2002, the Office of the General Counsel formally notified the Commission of the specific steps that staff from the Office of the General Counsel had taken or planned to take to examine the background of each PCAOB member. The Office of the General Counsel also provided the Commissioners with a copy of the questionnaire, which was based on the federal “Questionnaire for National Security Positions” and the “Statement for Completion by Presidential Nominees,” that each PCAOB member was asked to complete. This questionnaire was sent to the PCAOB appointees on November 6, and all documents were completed and returned by November 13. A supplemental questionnaire was sent to the PCAOB appointees on November 14, and all documents were completed and returned to SEC by November 20. Also on November 1, the Office of the General Counsel provided additional information on the role of CRM Consulting, the private contractor hired to verify the information on the questionnaires. At the time of our review, CRM Consulting was reviewing the appointees’ completed questionnaires and expected to complete its review by the end of the year. SEC staff are to review the information provided by CRM Consulting and look into certain issues, such as outstanding or anticipated lawsuits, administrative proceedings against the member, legal judgments, pending civil or criminal inquiries involving the member in any way, investigations or sanctions of the members by professional associations, financial obligations that might affect a member’s service, potential conflicts of interest, and other matters that if they became publicly known could subject the Commission or the PCAOB to embarrassment or disrespect. In addition to staff from the Office of the General Counsel, staff from the Division of Enforcement will be involved in this process. 
The Office of the General Counsel had planned to include a review of Judge Webster’s activities involving U.S. Technologies and other relevant matters. However, this review was suspended following Judge Webster’s resignation on November 12. The Office of the General Counsel described the staff review of the background checks as limited; SEC plans to rely primarily on the contractor’s check. The staff review also will not involve an assessment of the sufficiency of a member’s education, professional competence, or experience to serve. The review process was still ongoing at the time of our review. SEC lacked a documented and formalized selection and vetting process for nominees, and several factors contributed to the eventual breakdown of the Commission’s ability to select a slate of nominees that could be unanimously appointed. First, lacking formalized and tested procedures familiar to SEC staff and the Commissioners, the SEC Chairman did not reach consensus with the other Commissioners about the process; therefore, the Commission was unable to provide clear direction to staff. Second, the Commission neither agreed upon nor articulated formal selection criteria beyond the general criteria provided by the act. Finally, the lack of pre-appointment background checks and vetting exposed SEC to risks. Perhaps the biggest impediment to the smooth functioning of the selection process was a lack of initial consensus among the Commissioners and key SEC staff on the selection process. As previously mentioned, the Chairman initially decided that staff, primarily the Chief Accountant, who would have the most contact with the PCAOB, would drive the effort. Although we found some evidence that staff from the Office of the Chief Accountant and the Office of the General Counsel had informal meetings about the selection and vetting process in August, the process was not formalized and continued to change over time.
SEC did not find viable solutions to deal with Sunshine Act constraints, nor did the staff formalize a selection process and submit a plan to the Commission for its approval. One option would have been to hold an open meeting to discuss and agree to a process. Another option, which was subject to constraints identified by the General Counsel, would have been to hold a closed meeting. However, some of the Commissioners did not want to hold an open meeting because of privacy concerns about potential nominees and public scrutiny. Others were concerned that a closed meeting would have raised questions about the transparency of the process. A third option, which was suggested by at least one of the Commissioners, would have been to formalize the process in writing, circulate it, and reach agreement among the Commissioners. Reaching consensus early in the process through an organizational meeting on the roles, responsibilities, and duties of the Commission and SEC staff would have helped to provide direction and focus on how the selection process could best be accomplished. Without the benefit of such a meeting to share their thoughts and perspectives, it was difficult for the Commissioners to discuss the process and how they thought it should work. Instead, the Chief Accountant usually provided information to the SEC Chairman or his staff, and that information was expected to be relayed to the other Commissioners through weekly meetings between the SEC Chairman and each Commissioner. The SEC Chairman said that he also thought the Chief Accountant was meeting with the other Commissioners, but the Chief Accountant said that he relied to a great extent on the SEC Chairman’s weekly meeting with each Commissioner to keep them apprised. As a result, the Chief Accountant and the Chairman acted as intermediaries in keeping the Commissioners involved in the process.
The lack of early consensus and approval of the process by the Commission continued to affect the selection process. For example, some of the Commissioners complained that they were not sure about what was occurring and that they did not want to receive a final slate of names without being able to independently query candidates. As Commissioners raised concerns, the SEC Chairman and the Chief Accountant would adjust the process to accommodate the input provided. For example, early in the selection process, one Commissioner suggested that SEC focus on selecting a chairman and build the rest of the membership around that person. Another Commissioner, unhappy with the lack of a process and the apparent lack of progress, began arranging meetings with candidates on his own. However, he did not initially include the Chairman, which created a problem. Both concerns led to adjustments that expanded the selection process. The Commissioners ultimately were not able to reach agreement on an individual to serve as the chairman, and each of the Commissioners interviewed more candidates than originally planned. The lack of an articulated, agreed-upon process also eroded communications as the deadline drew closer. The evening before the October 25 vote, only three of the five Commissioners were provided with a draft of the names of the final slate. The act directs that PCAOB members be selected from “among prominent individuals of integrity and reputation who have a demonstrated commitment to the interests of investors and the public, and an understanding of the responsibilities for and nature of the financial disclosures required of issuers under the securities laws and the obligations of accountants with respect to the preparation and issuance of audit reports with respect to such disclosures.” However, these criteria alone made it difficult to narrow the list of nominees and applicants. We considered the process followed by other entities that appoint boards, nominate agency heads, or fill staff positions.
Generally, the process includes some sort of selection criteria. For example, the Financial Accounting Foundation (FAF), which appoints members to serve on the Financial Accounting Standards Board (FASB) as terms expire, has specific selection criteria for board membership. In addition to knowledge of financial accounting and reporting and an awareness of the financial reporting environment, FAF’s selection criteria include other skills such as critical thinking, communication, and interpersonal skills. SEC has a similar process for hiring staff for senior-level positions. Although the task faced by SEC was unique in some respects, there are valid comparisons that can be made; SEC staff also indicated that they considered the FAF approach, among others, when framing SEC’s selection and vetting process. However, we found no evidence that any additional selection criteria were identified, documented, and applied consistently among the candidates. Nor was consistent and sufficient information collected that would have allowed staff and the Commissioners to apply such criteria as considered appropriate. In keeping with the notion of augmenting the act’s selection criteria, the Chief Accountant said that his goal was to create a “balanced” board, which he defined as a diverse board representing a variety of constituencies and ideologies. He also stressed that he and the Commissioners sought a racially and gender-diverse membership. Given that the act required that two of the PCAOB’s five members be CPAs, the Chief Accountant wanted to ensure that the other members had the skills needed to establish a new organization. The Chief Accountant said that he generally categorized nominees or applicants into broad groups, including investor advocates, former chief executive officers or business executives, attorneys, politicians, academics, regulators, and accountants.
This approach was also consistent with the FAF model for FASB, which is balanced among academics, investors, industry representatives, and CPAs. However, unlike FAF, this approach does not appear to have been well-articulated or communicated to the Commission or the public, nor does it appear that all members of the Commission ever endorsed the Chief Accountant’s balanced board approach. One Commissioner wanted the board to have a strong law enforcement orientation because of the PCAOB’s mandate to enforce its regulations and standards. Yet another Commissioner wanted the board to include a majority of “reformers,” reflecting what he considered was the purpose of the act. At times, some Commissioners believed that balance also involved political party affiliation. The lack of agreement and open dialogue about these issues hampered the Commission’s ability to reach a consensus and eventually contributed to the ineffectiveness of the process. Participants in the process also believed that it was complicated by the involvement of a wide range of external parties and media scrutiny. As a result, the split Commission vote on the PCAOB, most notably the vote on the chairman, raised speculation about the integrity of the process. As previously mentioned, after the Commission was unable to appoint a consensus candidate for PCAOB chairman by the end of September, the Office of the General Counsel was forced to vet the final slate post-appointment. Although the Chairman had tasked the General Counsel with vetting the PCAOB appointees, the Commission and staff did not discuss or reach agreement on the role to be played by the General Counsel in the interview process after early October when the process changed.
One method of vetting would have been for the General Counsel, in addition to serving as a resource to candidates, to use a uniform list of questions to ask of potential candidates before the Commissioners interviewed them. Similarly, staff in the Office of the Chief Accountant had developed a list of interview questions that apparently was not used during the interviews, but which would have allowed the interviewers to solicit consistent information from candidates. The General Counsel said that he was not consistently scheduled for interviews with candidates and that he did not see the final slate of candidates until the evening before the vote. Therefore, the General Counsel could not elicit background information, adequately utilize existing SEC databases, or access other publicly available sources to conduct a minimum level of due diligence on potential members’ board memberships, affiliations, conflicts of interest, litigation, or other activities that might raise actual or apparent conflicts of interest or raise issues that could hamper the effectiveness of the PCAOB or embarrass the Commission. Due to the process and communication breakdown, the Office of the Chief Accountant, in connection with the Office of the General Counsel, the Division of Enforcement, and the Division of Corporation Finance, did not explore all internal sources of information early enough or fully enough to ensure that no conflicts existed. The Office of the General Counsel also was not able to review other publicly available sources of information in a timely manner. For example, the Office of the General Counsel could have learned about pending litigation through sources such as Westlaw and LexisNexis. The absence of vetting, while made known to the Commission prior to the vote, may have prevented the Commission from making a fully informed vote about the candidates.
Given the short time frame to appoint members and the lack of an existing formalized process, the PCAOB selection process was a difficult undertaking for SEC. Based on our reviews of various correspondence and extensive interviews with the principals involved, it is clear that the Commissioners never collectively discussed establishing a process nor reached consensus on how best to proceed in selecting members for the PCAOB. This lack of consensus was evidenced by a fundamental disagreement about whether the Commissioners should have played a lead role in identifying potential PCAOB candidates or whether the process should have been staff-driven as envisioned by the Chairman. Although Sunshine Act requirements may have made it more difficult for the Commission to reach this much needed consensus, SEC did not identify effective alternative methods for ensuring that the views of all the Commissioners were reflected in the process. As a result, the process changed and evolved over time and was neither consistent nor effective. Although the Commission was informed that background checks and vetting had not occurred before the vote on October 25, the Chairman and Commissioners generally believed that the Office of the General Counsel and/or the Office of the Chief Accountant was undertaking some type of vetting of candidates throughout the process. Given the highly scrutinized, political nature of the appointment process, any decisions had to be able to withstand intense public scrutiny and, hence, the lack of vetting proved to be a significant flaw in the selection process. Based on our reviews of thousands of pieces of correspondence and comprehensive interviews, we found no evidence that the SEC Chairman knew anything before the October 25 vote other than that Judge Webster had once been chairman of the audit committee of the board of directors of U.S. Technologies, a company on the brink of failure. 
This information, which the SEC Chairman heard from Judge Webster on October 15, was not detailed and did not raise a major concern at that time, and prior to the vote, the Chairman’s Chief of Staff had told the Chairman that Judge Webster’s involvement in U.S. Technologies was not a problem. However, the Office of the Chief Accountant performed insufficient due diligence in reaching this conclusion. In addition, the Chief Accountant’s failure to communicate any information to the General Counsel, who had responsibility for the vetting process, could have contributed to this incomplete assessment. When staff in the Office of the Chief Accountant conducted further analysis of U.S. Technologies on October 25, they became aware that the company’s 2001 filings disclosed that the company had dismissed its external auditor a month after that auditor reported material internal control weaknesses related to the company’s accounting and financial reporting infrastructure resulting from the lack of an experienced chief financial officer. Based on the factors previously discussed, including his experience as an auditor, his knowledge of Judge Webster’s long and prominent record of public service, and an understanding that additional vetting would take place post-appointment, the Chief Accountant concluded that this matter did not raise a concern and decided that it was not necessary to inform the Chairman, the other Commissioners, or the Office of the General Counsel of these issues. In light of the current environment surrounding auditors, the role played by audit committees of boards of directors of publicly held companies, and the expectation that new members of the PCAOB be beyond reproach, it is clear from our review of the relevant documents that these matters should have prompted SEC to perform additional, in-depth evaluation before reaching a conclusion about U.S.
Technologies and Judge Webster’s involvement. Further, in our view, the information concerning Judge Webster’s role as chairman of the audit committee of the board of directors of a company that had dismissed its external auditor after the auditor had found material internal control weaknesses should have been shared by the Chief Accountant with the SEC Chairman and other Commissioners prior to the vote. SEC was under enormous pressure in selecting the PCAOB members and had little time to do so. SEC also had difficulty getting certain outstanding individuals to agree to be PCAOB members because of the full-time service requirement and the need for members to give up certain forms of income and other professional or business activities. However, going forward, the Commission will be tasked with establishing a more credible process to replace individual PCAOB members, starting first with selecting a replacement for the chairman and then conducting annual staggered reappointments. Much can be done to improve the selection and vetting process. 
Before any additional members are appointed to the PCAOB, especially the chairman, we recommend that the Commission reach agreement on and document the process to be followed, the sequence and timing of key steps, and the roles to be played by the Commission and the staff in the selection and vetting of candidates; develop agreed-upon, detailed selection criteria for PCAOB members and the chairman that fully embrace the principles articulated in the Sarbanes-Oxley Act of 2002; develop a vetting process that ensures that before an applicant is brought to the Commission for serious consideration, certain minimum background and reference checks are performed to ensure that the individual has no potential legal or ethical impairments, and ensure that the vetting process is completed before the Commission votes to appoint members to the PCAOB; and determine what candidate information should be documented, analyzed, and shared among the Commission and staff. Moreover, we recommend that the SEC Chairman direct staff involved in the PCAOB selection process to make greater use of available technology to conduct necessary background checks and to generate sufficient details on the qualifications of potential applicants so that the Commission can make informed decisions on the fitness of potential applicants to be PCAOB members. We requested comments on a draft of this report from SEC Commissioners, SEC’s General Counsel, SEC’s former Chief Accountant, the Chairman’s former Chief of Staff, and others at SEC involved in the selection and vetting process. In addition, we requested comments from Judge William H. Webster and John H. Biggs. Each of these parties provided only technical comments on the report’s contents, which were incorporated as appropriate. The Chairman and each of the Commissioners also told us that they generally agreed with the report’s recommendations.
We will send copies of this report to the Ranking Member of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman of the House Committee on Energy and Commerce; the Chairman of the House Committee on Financial Services; and other interested congressional committees. We also will send copies to the Chairman and Commissioners of the SEC, the former Chief Accountant, Judge Webster, Mr. Biggs, and others upon request. In addition, this report is available on GAO’s Web site at no charge at http://www.gao.gov. If you or your staff have any questions concerning this letter, please contact Orice M. Williams or me at (202) 512-8678. Toayoa Aldridge, Wesley Phillips, Derald Seid, David Tarosky, and Barbara Roesmann made key contributions to this report. In addition, Robert Cramer of our Office of Special Investigations and Nelson Egbert and Mary Beth Sullivan of the SEC Office of the Inspector General made key contributions to this report.

Chronology of Events

Sarbanes-Oxley Act of 2002 is signed into law, requiring the Securities and Exchange Commission (SEC) to appoint a five-member Public Company Accounting Oversight Board (PCAOB) within 90 days.
Chairman delegates responsibility for identifying candidates for the PCAOB to the SEC Chief Accountant; the General Counsel is tasked with vetting candidates.
The legislative deadline to appoint the board is October 28, but SEC staff plan to complete the process by September 30.
SEC issues a release calling for nominations and applications for the board to be submitted by September 2. SEC also begins to directly solicit names of potential candidates from various stakeholders.
Paul A. Volcker emerges as the consensus choice for PCAOB chairman.
SEC Chairman has initial meeting with the Secretary of the Treasury and the Chairman of the Board of Governors of the Federal Reserve System to discuss candidates and obtain input.
List of PCAOB candidates with 210 names is distributed to the Commissioners.
Chief Counsel in the Office of the Chief Accountant suggests that a meeting be scheduled to discuss the mechanics of selecting the members of PCAOB.
Office of the General Counsel formalizes procedures for vetting candidates.
Chief Accountant recommends that SEC engage a private firm to conduct background investigations of PCAOB candidates.
Updated list of candidates with 325 names is distributed to the Commissioners.
SEC cutoff date for receipt of nominations and applications.
SEC Chairman, one Democratic Commissioner, and the Chief Accountant meet with Mr. Volcker in New York City to discuss the PCAOB chairmanship position. Mr. Volcker agrees to inform SEC Chairman of his decision by September 5.
SEC learns that Mr. Volcker will not accept the chairmanship.
SEC Chairman, a Democratic Commissioner, and the Chief Accountant meet with John H. Biggs in New York City to discuss the PCAOB chairmanship.
Charles Niemeier, PCAOB appointee, receives call from the Office of the Chief Accountant requesting a copy of his résumé.
Kayla Gillan, PCAOB appointee, receives call from the Office of the Chief Accountant to schedule a telephone conference with the Chief Accountant.
SEC Chief Accountant interviews Ms. Gillan by telephone.
Willis Gradison, PCAOB appointee, receives call from the Office of the Chief Accountant requesting a copy of his credentials.
Mr. Biggs meets in New York with a third Commissioner.
Ms. Gillan meets in New York with one Commissioner.
Ms. Gillan and Mr. Biggs meet in New York City at the suggestion of a Commissioner.
One Commissioner schedules interviews with fellow Commissioners for Ms. Gillan and another candidate. The Chairman was not included.
Late September: Office of the Chief Accountant begins scheduling interviews of PCAOB candidates with the Commissioners.
Mr. Gradison receives a call from the Chief Accountant to come in for interviews. Mr. Niemeier is also asked to come in for interviews.
For the Chairman, Mr. Biggs’s appointment is contingent upon another specific individual being appointed to the PCAOB. One of the other Commissioners informs the Chairman that a Commissioner may not be willing to support that individual.
SEC Chairman calls William H. Webster to ask him to consider taking a position on the new PCAOB.
Mr. Biggs has meetings with two Commissioners at SEC headquarters.
Two Commissioners inform the SEC Chairman that three of the Commissioners are unclear as to what the PCAOB selection process is, which they urge the Chairman to articulate. The two Commissioners suggest that all five Commissioners have a meeting about the selection process and “discuss where we are and where we are going.”
Daniel Goelzer, PCAOB appointee, receives a call from the Chief Accountant requesting a copy of his résumé.
Article in The New York Times reports that Mr. Biggs had “agreed to be the first head” of PCAOB.
Ms. Gillan meets with three Commissioners at SEC headquarters.
Mr. Biggs and the Chairman of the Board of Governors of the Federal Reserve System speak by telephone about the PCAOB.
SEC Chairman calls Mr. Biggs to inform him that the prior day’s newspaper article had created consternation at SEC and “complicated things” regarding the consideration of Mr. Biggs for chairman of PCAOB.
Mr. Goelzer meets with the SEC Chairman and the Chief Accountant.
Mr. Gradison has meetings with three Commissioners at SEC headquarters.
Mr. Goelzer has meetings with the remaining four Commissioners at SEC headquarters.
Mr. Gradison has meetings with the Chief Accountant, one Commissioner, and the SEC Chairman.
Article in The New York Times alleges that the SEC Chairman was “backing away from” Mr. Biggs as PCAOB chairman. For the SEC Chairman, this raises questions about Mr. Biggs’s independence.
Through the issuance of a Statement of Work and Request for Quotations (RFQ), SEC solicits bidders to conduct background investigations of prospective PCAOB members.
The RFQ requests that quotations be furnished by October 8.
Mr. Niemeier meets individually with Commissioners at SEC headquarters.
Ms. Gillan has telephone interview with the SEC Chairman.
Judge Webster meets at SEC headquarters with the SEC Chairman, the Chief Accountant, and the Chairman’s Chief of Staff, who together explain to Judge Webster the value he could bring to the PCAOB. During this meeting, Judge Webster mentions that he was the former chairman of the audit committee of the board of directors of U.S. Technologies, Inc.
At the request of staff in the Office of the Chief Accountant, staff in SEC’s Division of Enforcement searches the Name Relationship Search Index database for information on U.S. Technologies.
Around this time, the Office of the General Counsel realized that vetting would have to be completed post-appointment.
The Office of the Chief Accountant and the Office of the General Counsel file an Order for Supplies or Services to have Contract Resource Management, Inc. (CRM), conduct background investigations on PCAOB appointees. The order is for five to eight background investigations.
During a meeting with the Federal Reserve Chairman concerning a supervisory matter, the SEC Chairman shared several names of potential nominees for the PCAOB.
Judge Webster meets with the SEC Chairman, the Chief Accountant, and the Chairman’s Chief of Staff at SEC headquarters a second time to again discuss the possibility of Judge Webster serving on PCAOB.
SEC Chairman meets with Mr. Niemeier.
Judge Webster meets with remaining Commissioners in person or by telephone.
Judge Webster meets separately with the SEC General Counsel to discuss the full-time nature of PCAOB service and what that would entail.
A Commissioner requests that the SEC Chairman schedule an open meeting for the PCAOB vote. The Chairman schedules the open meeting for 2 p.m. on October 25.
Judge Webster agrees in the evening to be PCAOB chairman.
Chief Accountant leaves messages for Mr. Gradison and Mr. Goelzer regarding “exciting news.” Mr. Gradison returns the call the following morning, and the Chief Accountant informs him of his nomination.
Chief Accountant and the SEC Chairman recommend terms of office for board members.
Chief Accountant asks member of his staff to look again into the U.S. Technologies filings and enforcement actions.
Chief Accountant informs Ms. Gillan of her nomination to the board. Also, the SEC General Counsel does a verbal background check over the telephone.
SEC Chairman calls the Chairman of the Board of Governors of the Federal Reserve System and informs him that Judge Webster has accepted the PCAOB chairmanship. Later that day, the SEC Chairman provides the Chairman of the Board of Governors of the Federal Reserve System and the Secretary of the Treasury with the names of the four other nominees and asks them for letters endorsing SEC’s selections to the PCAOB.
Office of the General Counsel completes search of Name Relationship Search Index database for information on 17 finalists for the PCAOB.
Staff from the Office of the Chief Accountant provides the Chief Accountant with an overview of information collected on U.S. Technologies.
Mr. Niemeier finds out that he has been selected for the PCAOB the morning of the vote.
Commissioners see final slate and are formally told that vetting will occur post-appointment. Also, PCAOB nominees find out for the first time the names of their fellow board members.
Judge Webster, Ms. Gillan, Mr. Goelzer, Mr. Gradison, and Mr. Niemeier are appointed to the PCAOB at the SEC open meeting.
Judge Webster telephones the SEC Chairman to inform him that law enforcement officers had seized equipment and records at U.S. Technologies’s offices.
SEC press office receives an inquiry from The New York Times seeking comment on the content of an article it plans to print the following day concerning a criminal fraud investigation at U.S. Technologies.
Office of the Chief Accountant looks into whether the SEC Division of Corporation Finance reviewed U.S. Technologies’s Form 8-K and Form 8-K/A filings.
Article in The New York Times alleges that Judge Webster provided the SEC Chairman with detailed information about his role in U.S. Technologies when he met with him earlier in the month.
SEC Chairman and at least one other Commissioner independently contact the SEC Office of the Inspector General to investigate these allegations.
The Commission also asks the SEC Office of the General Counsel to conduct an investigation into Judge Webster’s involvement with U.S. Technologies.
SEC General Counsel outlines for the Commission the specific steps his staff are taking to examine the backgrounds of each PCAOB appointee.
SEC Chairman resigns.
SEC Office of the General Counsel sends out vetting questionnaires to PCAOB members with a deadline of November 15 for submission.
Chief Accountant resigns.
Judge Webster resigns as chairman of the PCAOB.
PCAOB holds planning meeting that includes Judge Webster.
Office of the General Counsel sends out supplemental questionnaires to PCAOB members with a deadline of November 20 for submissions.
All questionnaires have been sent to CRM, the contractor SEC hired to conduct background investigations.
Office of the General Counsel sends Ms. Gillan’s and Messrs. Niemeier’s, Gradison’s, and Goelzer’s supplemental questionnaires to CRM.
Article in The Wall Street Journal reports on the role played by Arthur Levitt in supporting Mr. Biggs.
PCAOB holds its second planning meeting that includes Judge Webster.
CRM briefs the Office of the General Counsel on its preliminary findings and indicates that it will provide a more formal report on December 12.
CRM is to provide a final report on the supplemental questionnaires to the Office of the General Counsel by this date.
The Sarbanes-Oxley Act of 2002 created, among other things, the Public Company Accounting Oversight Board (PCAOB) to oversee audits of public companies. A divided Securities and Exchange Commission (SEC) appointed the first PCAOB on October 25, 2002. Amid allegations that the SEC Chairman withheld relevant information from the other Commissioners concerning the suitability of the newly appointed PCAOB chairman, GAO was asked to examine SEC's selection process; determine whether the SEC Chairman withheld information from other Commissioners; determine what vetting of candidates took place; and identify what actions led to breakdowns in the process. SEC faced significant challenges in vetting and appointing five members to the newly created PCAOB within 90 days. The SEC Chairman, who had overall responsibility for the appointment process, initially envisioned a process primarily driven by SEC staff. He asked the Chief Accountant to take the lead in selecting and the General Counsel in vetting PCAOB members. However, this approach was not fully understood or endorsed by the other Commissioners. The overall process that emerged was neither consistent nor effective and changed and evolved over time. Several factors contributed to the eventual breakdown of SEC's selection and vetting process, including the inability of the Commissioners to reach agreement on a formalized process that defined the roles to be played by the Commissioners and staff; insufficient communication between SEC staff and Commissioners; and the lack of articulated selection criteria beyond general criteria provided by the act. Finally, the inability to choose a final slate of candidates until the eve of the Commission's vote resulted in the appointment of PCAOB members who had not been fully vetted. 
On the day of the October 25 vote, the Chief Accountant became aware of information concerning Judge William Webster, who was slated to be the chairman of the PCAOB, and his role as the former chairman of the audit committee of a small company--U.S. Technologies, Inc. However, based on his review of available information, his experience as an auditor, Judge Webster's prominence and reputation, and the fact that additional vetting would occur post-appointment, the Chief Accountant concluded that the information would not affect Judge Webster's nomination. He thus decided not to share the information concerning Judge Webster's role at U.S. Technologies with the SEC Chairman, the other Commissioners, or the General Counsel. As Judge Webster's appointment illustrates, the five individuals chosen for the PCAOB were not systematically vetted prior to appointment. After the selection process broke down in early October when the Commission was unable to agree on a consensus candidate for chairman, the General Counsel was forced to initiate the vetting process on a post-appointment basis, a fact the Commission was made aware of before the October 25 vote. At the time of our review, the vetting process was still ongoing.
Since 2002, the United States has allocated more than $72 billion for development, governance, and security in Afghanistan. See appendix II for a breakdown by year of U.S. allocation of funds for Afghanistan from fiscal year 2002 to fiscal year 2011. After almost a decade of donor-led efforts in Afghanistan, the United States and international donors have increased their focus on transitioning leadership to the Afghan government. The government of Afghanistan and the United Nations sponsored international conferences in London (January 2010) and Kabul (July 2010), which were attended by senior officials representing about 70 major donors, including the United States. Participants committed to supporting Afghan government leadership and ownership and agreed to increase the percentage of development aid delivered through the Afghan government to 50 percent by 2012, if the Afghan government showed progress in areas such as strengthening its public financial-management systems and reducing corruption. In November 2010, representatives from 48 countries contributing to the United Nations–mandated International Security Assistance Force agreed to a plan to begin transitioning security responsibilities to the Afghan government in 2011 and to complete this transition by 2014. In June 2011, the U.S. President reiterated that the process of security transition in Afghanistan will be complete by 2014. According to donors and the Afghan government, ensuring that public funds are used in a transparent and responsible manner is necessary for effective governance. Key elements of Afghanistan’s PFM include developing a national budget that represents the country’s priorities (budget formulation), spending the approved budget in the intended time frame (budget execution), and ensuring that funds are used as intended (through audits). 
In July 2010, the Afghan government published a plan, called the Public Financial Management Roadmap, to strengthen the Afghan government’s performance in three key areas at the national and provincial level: budget formulation, budget execution, and accountability and transparency of financial management. Additionally, a cross-cutting area is to increase the capacity of Afghan ministries. MOF led the development of the Roadmap, with input from the World Bank and International Monetary Fund, as well as partner governments such as the United States and United Kingdom. Table 1 provides an overview of the Roadmap’s components and their key areas. Additionally, appendix III provides information on the Afghan government’s budget process. U.S. embassy guidance issued since the release of the Roadmap states that U.S. governance activities are to increase focus on developing the capacity of Afghan civilian agencies in budget prioritization and execution to achieve progress towards Afghan self-governance in anticipation of the security transition. Several international donors are helping the Afghan government improve its PFM capacity. The World Bank, the United Kingdom, and the United States are key donors providing assistance to build PFM capacity of Afghan civilian agencies. Figure 1 shows donor PFM assistance to Afghan government entities at the national and provincial levels. Through the Combined Security Transition Command–Afghanistan (CSTC-A), DOD helps build the PFM capacity of Afghanistan’s security ministries, the Ministries of Defense (MOD) and Interior (MOI). CSTC-A is primarily U.S.-funded and staffed. In April 2009, the United States and its North Atlantic Treaty Organization (NATO) allies agreed to establish NATO Training Mission–Afghanistan (NTM-A) to oversee institutional training and development of the Afghan National Security Forces. NTM-A/CSTC-A now operates as an integrated NATO and U.S. command. 
International donors provide a significant share of the total funding for Afghanistan in the form of funds channeled through the Afghan government budget as well as “off-budget” assistance that does not use the Afghan government’s budget system. According to our preliminary analysis, the total estimated budget for Afghanistan for March 21, 2010–March 20, 2011 was $10.2 billion; donors were expected to fund about two-thirds of the Afghan government budget of $4.4 billion and the entire reported off-budget assistance of $5.8 billion. Figure 2 shows the total funding to Afghanistan through the Afghan government budget and off-budget donor assistance for March 2010 to March 2011. USAID, Treasury, and DOD support the Roadmap goals through various activities such as (1) USAID projects that provide technical assistance and training to Afghan civil servants, (2) Treasury advisers’ assistance to MOF, and (3) DOD’s mentoring and coaching assistance through CSTC-A to MOD and MOI. USAID provides training and technical assistance mainly through two contractor-implemented projects. Treasury provides technical assistance through 6 advisers in MOF, who work with senior officials on issues such as budget execution. Through CSTC-A, DOD has 22 advisers at MOD and MOI, who advise officials on developing their budgets and strengthening the payroll system to improve accuracy. USAID is funding several projects that provide training, mentoring and coaching, and technical assistance that address Roadmap goals. USAID has two primary PFM capacity-building projects—the Economic Growth and Governance Initiative (EGGI) project that has a contract value of approximately $92 million over 5 years, and the Afghanistan Civil Service Support (ACSS) project that has a contract value of approximately $84 million over 1 year. Both these projects are implemented by Deloitte Consulting, which has hired international contractors and local Afghans. 
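The funding figures above can be cross-checked with simple arithmetic. A minimal sketch in Python (the dollar amounts are taken from the report; the variable names and the two-thirds donor-share calculation are illustrative, not part of the report's methodology):

```python
# Reported Afghan budget figures for March 2010-March 2011, in billions of USD.
on_budget = 4.4    # Afghan government budget
off_budget = 5.8   # donor assistance outside the government budget system

# The two components should sum to the reported total estimated budget.
total = on_budget + off_budget
assert round(total, 1) == 10.2  # matches the $10.2 billion reported

# Donors were expected to fund about two-thirds of the on-budget amount
# plus all of the off-budget assistance.
donor_total = on_budget * 2 / 3 + off_budget
print(f"Total budget: ${total:.1f}B; donor-funded: about ${donor_total:.1f}B")
# prints: Total budget: $10.2B; donor-funded: about $8.7B
```

This back-of-the-envelope check illustrates why donors were funding the large majority of total spending: even the portion flowing through the Afghan government budget was mostly donor-financed.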
EGGI supports all four PFM goals of strengthening budget formulation, improving budget execution, increasing accountability and transparency, and improving ministry capacity. For example, according to USAID, EGGI contractors provided program-budget technical assistance to 37 Afghan central ministries and agencies so that these entities can formulate annual budgets that conform to MOF guidance. Examples of EGGI’s technical assistance include contractors helping MOF’s fiscal policy unit develop a fiscal forecasting model and providing policy assistance to the Ministers of Finance and Economy. ACSS similarly supports most Roadmap goals through training it provides in financial management and procurement to Afghan civil servants in Kabul and in the provinces. Table 2 provides information on EGGI and ACSS, as well as other USAID projects that, while not exclusively focused on PFM capacity building, provide some PFM capacity-building assistance. A July 2009 Treasury assessment, which was part of a proposal for enhanced civilian assistance to Afghanistan, identified the need for advisers to assist with public expenditure management in MOF’s offices of treasury, budget, and internal audit. Treasury assigned six advisers from its Office of Technical Assistance to assist various MOF offices. These advisers are embedded in MOF offices and provide assistance that supports Roadmap goals by advising senior Afghan MOF officials, analyzing and monitoring budget expenditures and controls, and developing accounting and administrative manuals. Table 3 indicates the location of Treasury advisers within MOF, the type of assistance provided, and the PFM goals supported by this assistance. As of April 2011, DOD, through CSTC-A, and other coalition partners had 587 advisers and mentors working with their Afghan counterparts to build the capacity of MOD and MOI in about 30 functional areas, such as intelligence, personnel management, logistics, and finance and budget. 
Of the 587 advisers and mentors, CSTC-A had 7 advisers at MOD and 15 at MOI who provided advice in the finance and budget area. CSTC-A is working to ensure that MOD and MOI are capable of operating without coalition assistance by 2014. DOD employees or military personnel under CSTC-A provide leadership for capacity building in the finance and budget area; they are partnered with specific Afghan officials in MOI and MOD units and oversee the work of U.S. contractors. CSTC-A has developed comprehensive Ministerial Development Plans to guide its capacity-building efforts at MOI and MOD, including plans for finance and budget (see app. IV for a listing of all focus areas). Table 4 shows the specific goals in the finance and budget plans for MOD and MOI for 2011. According to CSTC-A officials, advisers at MOD focus on building Afghan capacity and generally do not carry out the work of MOD officials. At MOI, advisers meet with finance and budget partners daily to help formulate budgets and develop pay procedures, such as the electronic payroll system. As shown in table 5, the advisers provide advice that supports the Roadmap goals of improving budget formulation and execution, and increasing accountability and transparency. According to DOD officials, the goal of their capacity-building efforts is to ensure that MOD and MOI are fully capable of carrying out key functions such as finance and budget without coalition assistance. According to the official terms of reference for CSTC-A advisers, they advise, facilitate, and collaborate with their MOI and MOD counterparts. The guidance states that on rare occasions, a crisis will occur and the adviser will have to perform the relevant functions. According to the CSTC-A officer directing finance capacity building at MOD, the advisers should allow their Afghan counterparts to make mistakes and learn from them, unless the mistake is critical. The overall results of U.S. efforts cannot be fully determined because (1) U.S. 
agencies providing PFM capacity assistance to the Afghan government have reported mixed results of their efforts, and (2) weaknesses in USAID’s performance management plans and frameworks, such as lack of performance targets and data, prevent reliable assessments of USAID’s results. USAID’s evaluations of its two primary PFM projects indicate that some activities were successfully completed, while others were terminated because their usefulness was questionable. Treasury advisers assessed that although their assistance at MOF had a positive effect, the results fell short of what they were trying to accomplish. Additionally, CSTC-A reported that while MOD has progressed to being able to perform critical financial-management functions with minimal coalition support, MOI continues to rely on coalition support for these functions. Moreover, CSTC-A has extended transition milestones for security ministries. Regarding USAID’s performance management, targets and performance data have not been approved for PFM efforts at the USAID Mission level as well as for PFM-focused projects. USAID, Treasury, and DOD reported mixed results related to U.S. efforts to build Afghan PFM capacity. USAID has reported output and some outcome information for its EGGI and ACSS projects, and conducted project evaluations that indicate positive and negative results. In 2010, the EGGI project trained approximately 800 government employees on how to develop a program budget, which allows budget units to request funding based on expected outputs and outcomes of specific programs. In addition, EGGI provided technical assistance to 37 budget units to help them prepare annual budgets for Afghan fiscal year March 2011 to March 2012. According to USAID, this training was effective because all budget units prepared program budgets and submitted them in a timely manner. 
USAID also reported that 10 ministries “graduated” from budget training as these units can prepare their own budgets with little or no assistance from the EGGI team. According to USAID evaluators, program budgeting represented a major programmatic accomplishment for the EGGI project. However, other EGGI activities were not as successful, in part because USAID and the project’s implementing partner did not adequately consult with relevant stakeholders, such as other international donors conducting related activities. For example, EGGI developed a Revenue Reconciliation Database to report revenue collections to the Afghanistan Revenue Department in real time. According to USAID evaluators, the EGGI revenue database does not directly connect with the Afghan government’s accounting system, as it was intended to do, or the revenue-collection system being developed by DFID; therefore, the database does not provide information that is useful for planning or analysis. USAID has since terminated this activity. However, USAID officials noted that because of delays in the installation of the DFID system, the USAID database is the only functional system in use by MOF’s Afghanistan Revenue Department to collect and report revenue data. Additionally, USAID’s evaluators noted that USAID and DFID did not adequately coordinate efforts related to creating a medium taxpayer office in Herat. According to USAID, ACSS trained approximately 16,000 civil servants in Kabul (5,759 participants) and in 26 provinces (10,121 participants) from February 2010 to March 2011. It provided training in five core subjects, including financial management and procurement. According to evaluations by USAID and the Afghan government’s Independent Administrative Reform and Civil Service Commission, ACSS training contributed to improvements in civil servants’ performance. However, USAID evaluators noted that a lack of baseline data makes it impossible to measure the extent of these improvements. 
In addition, USAID’s evaluation of ACSS cites several challenges resulting from the project’s Afghan-led approach, which involved Afghan officials in making decisions related to project implementation. Some Afghan officials viewed the Afghan-led approach to mean that they did not need to ensure accountability for some inventory. As a result, in some cases, USAID inventory stickers were reportedly removed from assets before the items were transferred to Afghan control, causing accountability issues, according to USAID’s evaluation. In addition, some Afghan managers interpreted the concept of Afghan-led to mean that they could prevent the project’s monitoring and evaluation team from obtaining data needed to assess project results, according to USAID’s evaluation. For example, when ACSS staff requested monitoring and evaluation data, some Afghan managers responded that they were not required to provide data and that such requests represented an inappropriate desire to exert control over their operations. As a result, the quality of monitoring and evaluation of the project suffered. To assess the effect of Treasury advisers’ technical assistance to MOF, Treasury’s Office of Technical Assistance requires each adviser to submit monthly reports and an annual evaluation. For the period October 2009 to September 2010, Treasury advisers assessed that although their assistance related to budget and financial management at MOF had a positive effect, the results were less than what they were trying to accomplish. For instance, the advisers gave a low score for results related to their efforts to design management reports to improve communication of financial information and enhanced budget control. However, the advisers assigned a high rating for results related to developing training materials to support capacity building. The advisers also assessed the level of commitment and involvement of their Afghan counterparts as being above average, with room for improvement. 
CSTC-A has established a process to assess progress on key objectives and rate the capacity of MOD and MOI, in areas including finance and budget functions, on a quarterly basis. According to CSTC-A’s assessments, MOD has progressed from significant reliance on coalition support in 2008 to being capable of executing core functions related to finance and budget with minimal coalition support by 2011. MOI has progressed from not being able to accomplish finance and budget functions on its own in 2008 to being able to carry out core functions with significant coalition assistance by 2011. Figure 3 shows actual and projected ratings for MOD and MOI’s capability in finance and budget operations. Progress at MOI has been slow, in part because CSTC-A’s capacity-building efforts at MOI started in 2006, several years after its efforts began at MOD in 2002. CSTC-A has extended the time frames to meet interim and final capacity-building goals at MOD and MOI due to various challenges. See appendix IV for a description of CSTC-A’s capacity assessment process. According to CSTC-A assessments, MOD transitioned from needing significant coalition support to accomplish finance and budget functions to requiring some coalition assistance in December 2008. Although CSTC-A’s rating of MOD has not changed since then, CSTC-A officials documented progress at MOD. For example, CSTC-A’s quarterly assessment of MOD for January through March 2011 reported that MOD executed critical functions, such as paying salaries to soldiers, with minimum coalition support; implemented a policy to transition pay operations from U.S. 
embedded teams to Afghan army finance officers; streamlined pay systems and expanded electronic funds transfer capability so that salaries of 97 percent of Afghan soldiers are deposited directly into their bank accounts; executed its budget for salaries and other operational expenditures at over 99 percent for Afghan fiscal year 2010-2011; and implemented an integrated program budget-formulation process for this fiscal year, such that the budget request was tied to the ministry’s strategic goals and performance measures. However, MOD has not progressed to needing only coalition oversight in carrying out finance and budget functions because of a lack of sufficient delegation of budget authority from MOD’s central finance office to subordinate units and commands of the Afghan army. Although CSTC-A officials worked with MOD to clarify roles and responsibilities of various offices, highly centralized budget authority prevented MOD from developing and executing the budget with input from the commands of the Afghan National Army. Therefore, CSTC-A advisers are still involved to ensure that MOD develops integrated program budgets. Although MOI progressed from an inability to accomplish its finance and budget functions in March 2008 to being able to accomplish these functions with significant coalition support by March 2009, MOI’s capacity is far behind MOD’s. CSTC-A officials stated that their capacity-building efforts at MOI started in 2006, several years after CSTC-A’s capacity-building efforts began at MOD in 2002. CSTC-A’s quarterly assessment of MOI for January through March 2011 rates MOI at the same level as in 2009, citing factors such as lack of consolidation of MOI’s personnel databases due to delays in contracts and lack of telecommunications network expansion. As a result, neither CSTC-A nor Afghan officials are certain that all salary payments are being made to legitimate Afghan National Police personnel rather than “ghost” employees. 
Additionally, current MOI finance office employees do not have formal training in properly executing the budget and salary functions. Moreover, according to a CSTC-A assessment, MOI has a top-down organizational culture in which officials tend to delegate key decisions to the minister. For example, MOI’s Program Budget Advisory Committee, which is responsible for reviewing expenditures, is reluctant to make decisions affecting budget execution and pushes them up to the Minister of Interior. This reportedly resulted in necessary actions not being taken in a timely fashion. CSTC-A’s goal is for MOD and MOI to achieve self-sustainability prior to 2014, when coalition forces are scheduled to transition security responsibilities to the Afghan government. However, CSTC-A has extended the time frame for MOD and MOI finance offices to meet interim and final goals due to various challenges. For example, in early 2010, MOD’s finance office was expected to reach the interim goal of operating autonomously with only coalition oversight by March 2011 and the final goal of autonomous operations by January 2012. However, in March 2011 the projected date for meeting the interim goal was extended by 3 months to June 2011 and for meeting the final goal was extended by 6 months to June 2012. Similarly, we have previously reported that in several instances DOD pushed out completion dates related to training of Afghan army and police forces. The time frames for MOI to reach interim and final goals have also been revised. For example, in early 2010, CSTC-A expected MOI’s finance office to operate with some coalition assistance by March 2011 and become fully autonomous by March 2012. However, in March 2011, the time frames were revised so that the interim and final goals were expected to take 8 additional months each, and projected to be accomplished by November 2011 and November 2012, respectively. The delay was attributed to problems in implementing the electronic payroll system. 
Despite these delays, CSTC-A officials noted that they expect MOI to become fully autonomous by identifying and addressing high-risk areas, such as electronic payroll. USAID’s Automated Directives System (ADS) establishes performance management and evaluation procedures USAID is expected to follow with respect to planning, monitoring, and evaluating its programs. While USAID has noted that Afghanistan is an insecure environment in which to implement its programs, the agency has generally maintained the same performance management and evaluation procedures as it does in other countries in which it operates. For PFM capacity building in Afghanistan, we found a lack of compliance with USAID guidance at the mission level and at the implementing-partner level. Additionally, USAID evaluations also note weaknesses in the performance management of PFM projects. Appendix V presents a summary of the planning, monitoring, and evaluating requirements that make up USAID’s performance management and evaluation procedures. At the mission level, ADS requires USAID officials to complete a Mission performance management plan for each of its high-level objectives as a tool to manage its performance management and evaluation procedures. The guidance also requires that USAID establish performance targets for each of the indicators. We previously reported in July 2010 that USAID has operated without a required Mission performance management plan for Afghanistan since the end of 2008. Subsequently, USAID issued a new performance management plan that realigned objectives based on U.S. strategies adopted in 2009 as well as agreements made at the London Conference in January 2010 and the Kabul Conference in July 2010. This plan represents the U.S. Mission’s tool to plan and manage the process of assessing and reporting progress towards assistance and foreign policy objectives in Afghanistan. 
As figure 4 shows, the plan contains a results framework that includes two PFM-related objectives and several related indicators, but lacks performance targets for each indicator as required. The plan noted that baselines and targets for each indicator would be established in the first and second quarters of fiscal year 2011. Additionally, according to the plan, many indicators do not have baseline data or targets because some indicators are either new or were in the previous plan but data were never collected for them. This is contrary to ADS, which requires that targets be established for each performance indicator. In addition, some indicators are for proposed activities and need to be finalized. For example, one of the indicators, “Percent increase in standardized Public Financial Management assessment scores,” relies on baseline data for Afghan civilian ministries that are not yet available. In April 2011, donors initiated joint assessments of 14 Afghan civilian ministries, which account for 90 percent of the development budget, to establish baselines of their PFM capacity. These assessments are due by 2012, with preliminary information only available for MOF. ADS documents USAID’s performance management and monitoring procedures. Project implementers must follow the requirements outlined in USAID award documents. For example, project implementers are required to identify performance indicators, define their project’s “starting point” by establishing baselines, and define changes that signal success by establishing performance targets for each project year. In addition, project implementers are required to regularly collect, analyze, and interpret performance data in order to improve their ability to make project adjustments in a timely manner. The performance management frameworks for PFM capacity-building projects did not meet USAID guidance because of deficiencies such as a lack of baselines, targets, and performance data. 
The performance indicators for EGGI and ACSS related to PFM capacity-building activities do not consistently provide baselines, performance targets, or actual performance data for each indicator, as required. For example, as shown in figure 5, for fiscal year 2010, EGGI’s implementing partner did not establish performance targets or provide actual data on a quarterly or annual basis for training activities conducted by EGGI. Additionally, EGGI’s implementing partner did not report target or actual data for the indicator called “Annual Tax Revenues Collected in Priority Revenue Mustofiats” because MOF did not release revenue data during the project’s performance year. Subsequently, USAID provided fiscal year 2010 quarterly and annual data for this indicator to GAO in the fourth quarter of fiscal year 2011. While these data may provide information that is useful for project evaluation, since they were provided retroactively, they were unavailable for USAID to monitor the progress of the activity during the performance year. Similarly, ACSS’s implementing partner did not provide evidence of baselines, quarterly performance targets, or actual quarterly data for any of the project’s PFM training indicators. This lack of targets and actual performance data rendered associated indicators ineffective for either tracking the progress of associated activities or assessing the extent to which USAID’s implementing partners met interim goals. Appendix VI provides the available data for fiscal years 2010 and 2011 for EGGI and ACSS. For fiscal year 2011, although USAID-funded evaluations note that EGGI improved its performance measures and data collection, our analysis indicates that both EGGI and ACSS did not establish performance targets for each indicator, or report actual data for completed quarters, as required. 
As shown in figure 6, EGGI did not establish quarterly or annual targets for its training activities that include key areas, such as program budgeting and tax administration, noted in EGGI’s workplan. Similarly, ACSS did not establish baselines for any of its PFM indicators, which would make it difficult to assess the project’s training accomplishments. In addition to the lack of compliance with USAID guidance noted above, USAID-funded evaluations of EGGI and ACSS have noted other weaknesses in the projects’ performance management frameworks. For example, the evaluator of the EGGI project was critical of the large number of performance indicators for the project’s first year that do not demonstrate a direct link between project efforts and improved capacity of the Afghan government. More specifically, the evaluator noted that only 4 of 7 indicators related to EGGI’s PFM capacity-building component are directly affected by the project’s work. The other indicators—national domestic revenues, national tax revenues, and national nontax revenues—are indirectly affected by the project’s work. Another EGGI evaluation noted that since MOF was not publishing data for the indicator related to annual taxes in priority provinces, the implementer should replace it with a different indicator that can be used to monitor activities. The evaluator of the ACSS project noted that the project’s work plan, performance management plan, and operations manual were not sufficiently rigorous or comprehensive and, given the scope and scale of the project, the evaluator expected higher-quality project documentation. The evaluator also noted that the project’s performance monitoring framework lacks results-based information about the effectiveness of the project’s technical advisers. In addition to weaknesses in the performance management frameworks of USAID’s PFM capacity projects, we have previously noted similar deficiencies in other USAID projects in Afghanistan. 
For example:
- In July 2008, we reported that, among other things, limitations in USAID’s data collection and performance evaluation frameworks impeded the agency’s ability to evaluate the effects of its roads projects.
- In July 2010, we reported that USAID did not assure that all indicators had targets for the eight GAO-reviewed agriculture projects.
- In November 2010, we reported that four of the six implementers of GAO-reviewed water projects did not always establish targets for performance indicators.
We are following up on USAID’s progress on these recommendations. Additionally, the USAID Administrator committed to tracking resources against outcomes as effectively as possible at a congressional hearing in July 2010. The United States and other international donors have begun to focus on building the Afghan government’s capacity for a successful transition of leadership for security and governance to Afghanistan. Improving the Afghan government’s ability to manage its public finances is an important part of this transition effort. USAID, Treasury, and DOD, along with other international donors, have undertaken various efforts to address the Afghan government’s PFM capacity. While DOD and Treasury have assessed and reported mixed results based on their efforts, the overall effect of U.S. efforts is not known because USAID, which has a key role in building Afghan civilian ministries’ PFM capacity, has not consistently established baselines and targets, or reported actual performance data. We have previously reported on deficiencies in USAID’s performance management efforts in Afghanistan and made recommendations to improve the assessment of USAID program performance and the efforts of USAID’s implementing partners. During a congressional hearing in July 2010, USAID’s Administrator identified defining, tracking, observing, and reporting on results of USAID projects as a priority, noting that it is important to determine how USAID efforts contribute to the U.S. 
strategy for Afghanistan. He also committed to tracking resources against outcomes as effectively as possible. The lack of approved mission-level and implementing partner-level performance targets calls into question USAID’s efforts to live up to the Administrator’s commitment. Given the importance of USAID efforts to improving Afghanistan’s PFM capacity and the need for reliable performance data on which to base future development assistance and funding decisions, it is vital that USAID take steps to ensure its performance management efforts are consistently implemented. We recommend that for public financial management (PFM) efforts, the USAID Administrator take the following three actions: (1) establish targets, as required, for each PFM-related performance indicator in its Mission Performance Management Plan for Afghanistan, (2) take steps to ensure that the USAID-approved performance management plan for each implementing partner includes baseline data and targets for each indicator, and (3) ensure that implementing partners report performance data at the frequency established in the performance management plan. We provided a draft of this report to DOD, State, Treasury, USAID, and the World Bank for comment and review. USAID provided written comments, which are reprinted in appendix VII, as well as technical comments, which we have incorporated as appropriate. DOD, State, Treasury, and the World Bank had no comments. USAID concurred with all three recommendations and noted that it has started taking steps to address them. Specifically, USAID stated that it had started a review of the Mission performance management plan to determine if adequate PFM-related performance indicators, including baseline data and targets, are included. 
Additionally, USAID noted that it has commenced a comprehensive review of all awards to USAID implementing partners working on projects and activities in the PFM sector to ensure USAID-approved performance management plans are in place for each award and each implementing partner’s performance management plan includes sufficient baseline data and targets for each indicator. As part of this review, USAID noted that it is also examining the reporting requirements set forth in relevant awards, including performance requirements, and would take corrective action, as needed. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Defense, State, and Treasury, as well as the Administrator of USAID, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. This report examines (1) U.S. efforts to improve the Afghan government’s public financial management (PFM) capacity, including the extent to which these efforts aligned with the PFM goals identified by the Afghan government and the international community; and (2) the extent to which U.S. efforts have helped to improve the Afghan government’s PFM capacity. To address these objectives, we focused on U.S. agencies’ PFM capacity-building activities since 2009, when President Obama concluded the administration’s review of U.S. efforts in Afghanistan and announced a strategy for Afghanistan and Pakistan. We reviewed documents and records from the U.S. Departments of Defense (DOD), State (State), and the Treasury (Treasury), and the U.S. Agency for International Development (USAID). 
We selected these agencies because they provide guidance or assistance related to U.S. PFM capacity building for the Afghan government. We also reviewed documents from the Afghan government and other donors, such as the World Bank. We interviewed officials from DOD, State, Treasury, and USAID in Washington, D.C., and in Kabul, Afghanistan. Additionally, in Kabul, we interviewed officials from other international donors providing PFM assistance, such as the World Bank and the United Kingdom, as well as Afghan government officials from organizations including the Ministry of Finance (MOF), Ministry of Defense (MOD), Ministry of Public Health, and the Independent Administrative Reform and Civil Service Commission. To inventory and describe U.S. government efforts to build the Afghan government’s PFM capacity, we interviewed officials from U.S. agencies, including USAID, Treasury, and DOD, and reviewed relevant documents. We also compared U.S. agency efforts with the key components of the Public Financial Management Roadmap (Roadmap), which represents goals agreed upon by donors and the Afghan government to improve the Afghan government’s PFM capacity. For USAID’s PFM capacity-building work, we reviewed documents, including base contracts, contract modifications, and scopes of work, that describe USAID-funded PFM capacity-building projects in Afghanistan. To identify USAID PFM capacity-building projects since 2009, we interviewed officials from USAID’s Office of Democracy and Governance and Office of Economic Growth. Some USAID projects are divided into major areas of focus called components. Two USAID projects—Economic Growth and Governance Initiative (EGGI) and Afghanistan Civil Service Support (ACSS)—have at least one component related to building the PFM capacity of Afghan government entities at the national and provincial levels. We selected these projects for a more detailed review. 
Some projects did not have a component focused on building the Afghan government’s capacity in budget formulation, budget execution, or transparency and accountability. However, we included them in our inventory because USAID officials noted that these projects contribute to increasing the Afghan government’s PFM capacity. We decided to focus on projects that primarily build PFM capacity at the national and provincial levels because the Roadmap prioritizes capacity building at these levels. To describe the PFM efforts of Treasury officials, we reviewed documents including Treasury’s 2009 assessment of the need for advisers to assist the Afghan government with public-expenditure management and a summary of the roles of advisers currently at MOF. In Kabul, we also interviewed the Treasury advisers and met with them at MOF. To describe DOD’s PFM capacity-building work, we reviewed documents including organizational charts and staffing levels, policy guidance and operating procedures, and various implementation, development, and evaluation plans for both MOD and Ministry of Interior (MOI). For information on the role of mentors and advisers in building capacity at MOD and MOI, we interviewed senior DOD officials at the Combined Security Transition Command–Afghanistan (CSTC-A), who were providing the services to both MOI and MOD, as well as MOD Finance officials who were receiving these services. In addition, we reviewed MOD and MOI training manuals, course materials, and adviser guides developed specifically for mentoring and capacity building in these ministries. We also reviewed the contracts providing mentoring and training services to MOD and MOI. To assess alignment of U.S. efforts and goals with the PFM goals of international donors and the Afghan government, we compared efforts undertaken by USAID, Treasury, and DOD with the key components of the Roadmap. We also reviewed U.S. plans and guidance issued since 2009, including the U.S. 
Foreign Assistance for Afghanistan Post Performance Management Plan (2011-2015) and U.S. Mission guidance to U.S. agencies on PFM issues in Afghanistan. For U.S. military efforts we reviewed goals stated in CSTC-A Ministerial Development Plans for finance and budget functions for MOD and MOI. We also interviewed U.S. officials from USAID, Treasury, and CSTC-A to corroborate the extent to which the Roadmap guided their efforts in working with Afghan ministries. Additionally, we obtained relevant documentation from MOF officials, including the implementation plan and a 6-month progress report related to the Roadmap. To assess U.S. efforts within the broader context of other donor efforts, we interviewed donors including the World Bank and United Kingdom’s Department for International Development (DFID) in Kabul. We also obtained documents, such as the Roadmap Implementation Plan and Technical Assistance Summary Reports, and interviewed officials from the World Bank and MOF. To assess the results of U.S. PFM efforts, we analyzed relevant documentation from U.S. agencies. For USAID, we analyzed the performance plans for the Mission as well as for PFM-specific projects. Additionally we analyzed the results reported in project evaluations as well as monthly, quarterly, and annual reports for the two main PFM-focused projects. USAID evaluations were based on document reviews as well as interviews with contractors, USAID officials, and Afghan government officials. Additionally, the evaluation for EGGI included surveys of and interviews with program beneficiaries (e.g., Afghan officials in budget units). We found these evaluations to be generally reliable to provide information on the results of specific activities conducted under EGGI and ACSS for the period for which the evaluations were conducted. From the Mission Performance Management Plan (PMP), we identified objectives and associated indicators that pertain to PFM. 
We then reviewed the associated data provided by USAID that complemented the results framework noted in the Mission PMP. We identified whether USAID data included fiscal year 2011 targets for the PFM indicators we had identified in the PMP. In analyzing USAID project-performance indicators, we reviewed the most current performance data from USAID and its implementing partners. Target and performance data were not available for the first quarter of fiscal year 2010 for ACSS and EGGI because ACSS began operations in the second quarter of fiscal year 2010; although EGGI started in the first quarter of fiscal year 2010, USAID officials reported that they did not expect performance targets and data for this quarter because the project was being set up. As of the date of this report’s publication, no data were available for the fourth quarter of fiscal year 2011, so our analysis does not include data from that quarter. We also interviewed USAID officials and the contractor to obtain clarifying information about performance data. Due to unexplained changes and gaps in some target and performance data, we could not verify the reliability of all performance data reported by USAID. Additionally, we obtained documentation and corroborated the effectiveness of U.S. efforts during meetings with Afghan officials from relevant governmental organizations, such as the Independent Administrative Reform and Civil Service Commission. At the commission’s training institute, we met with contractors who provide training to Afghan civil servants as part of the ACSS project. To assess the results of the efforts of Treasury advisers, we reviewed a sample of adviser monthly reports as well as advisers’ assessment for October 2009 to September 2010, the latest available report for a complete fiscal year. 
We also interviewed officials from Treasury’s Office of Technical Assistance to obtain information on the adviser assessment and reporting process. Moreover, in Afghanistan we met with MOF officials and contractors in MOF’s budget, treasury, and internal audit offices, and we obtained their input regarding USAID’s and Treasury’s efforts and results. We assessed data obtained for our analysis to be sufficiently reliable to provide an overall assessment of the extent to which Treasury advisers’ assistance has contributed to MOF’s PFM capacity. We did not independently assess the capability of MOD and MOI finance offices; rather, we relied on CSTC-A’s capability milestone ratings, which are used to measure the capability level of a specific area or department. We spoke with cognizant CSTC-A officials about the reliability of these ratings and also reviewed documentation about the ratings formulation process. In addition, CSTC-A officials provided documentation that showed modifications in their rating process to make it more rigorous. For example, in 2010, CSTC-A introduced interim ratings, which it believed provided greater detail and a more gradual transition from requiring significant coalition assistance to being capable of fully autonomous operations. We believe that the capability milestone ratings are sufficiently reliable to measure the extent of progress in finance and budget functions at MOD and MOI. Additionally, we analyzed finance- and budget-related development and evaluation plans for MOD and MOI, quarterly assessment briefings, Ministerial Development Board presentations, internal control finance reports, and other adviser reports. We compared quarterly assessments and projected capability milestone ratings over time to identify the extent to which DOD was meeting its targets. We also visited MOD’s Finance office in Kabul and interviewed Afghan officials to obtain their input into U.S. efforts to build their ministry’s capacity in finance and budget. 
Since 2002, the United States has allocated more than $72 billion for security, governance, and development to Afghanistan (see table 6). According to U.S. agency officials and documents, the budget process in Afghanistan is highly centralized, with the Afghan national government responsible for developing and executing the country’s budget. According to the Department of the Treasury (Treasury), none of the 34 Afghan provinces has the authority to raise revenues or spend public funds. Budgets are developed and executed through central ministries and their provincial offices, called provincial directorates. The provincial governor, appointed by the Afghan President, and the Provincial Council, whose members are elected, can influence the national budget and indicate priorities through the Provincial Development Committee, which also includes representatives from provincial directorates, according to Treasury officials. The ministries then formulate specific budgets and convey them to the Ministry of Finance (MOF), which develops the overall national budget. The Afghan parliament, called the National Assembly, has the authority to approve or reject the national budget in its entirety, but cannot make changes to it during the final review process, according to documentation from the U.S. Agency for International Development (USAID). Figure 7 provides an illustration of Afghan budget formulation, execution, and audits at the national and provincial levels. According to Treasury officials, once the National Assembly approves the budget, ministries prepare and submit allotment requests for the operating budget. MOF establishes quarterly allotments for the operating budget for each ministry. For the development budget, each ministry can submit a budget allotment request only if there is an approved contract (or legally binding agreement) for the project, according to the Treasury adviser. MOF must approve the allotment request before the budget allotment is established. 
The external budget is not subject to the budget allotment process. The central ministries and their provincial offices implement programs and execute funds. However, MOF and its provincial offices, called mustofiats, issue payments in response to requests from ministries. Audit of expenditures to ensure proper use of public funds is the responsibility of the Control and Audit Office, Afghanistan’s supreme audit institution, and the audit offices of central ministries, according to Treasury officials. The Combined Security Transition Command–Afghanistan (CSTC-A) has established a process to regularly assess and rate the capability of the Ministries of Defense (MOD) and Interior (MOI) in various areas, including finance and budget. The CSTC-A assessment process uses the objectives in the Ministerial Development Plan as criteria to assess progress in capacity building at the ministries. For example, CSTC-A advisers at MOD provide quarterly assessments, through Narrative Assessment Worksheets, in which they assign a rating and provide a description of progress and challenges associated with each objective. Additionally, MOD advisers complete an online survey to provide information about the scope and quality of their interaction with their Afghan counterparts on a quarterly basis. According to CSTC-A officials, the MOI assessment and rating is based on interviews with advisers and Afghan officials, internal quarterly assessments, and other officials’ reporting on a quarterly basis. Figure 8 shows CSTC-A’s assessment process for MOD. Senior CSTC-A officials review input from the advisers and agree on an overall rating. This rating and associated details are presented in a quarterly assessment that includes highlights of progress toward the objectives, key objectives to focus on in the next quarter, and an analysis of the ministry’s strengths, weaknesses, opportunities, and threats. 
These assessments, which are available in English and Dari, are presented to senior coalition forces and Afghan government officials as part of the Ministerial Development Board process. This review process has three key goals: (1) to institutionalize the formal review and oversight of the separate Ministerial Development Plans for various critical ministry functions (see table 7 for areas of focus for MOD and MOI capacity building), (2) to assess progress toward achieving fully autonomous operations including addressing key impediments, and (3) to reinforce and set the current and following quarter’s objectives and initiatives. Although CSTC-A does not currently use adviser surveys for MOI, it is in the process of developing and implementing these, according to CSTC-A officials. The figures below show fiscal years 2010 and 2011 performance data for U.S. Agency for International Development (USAID) projects that focus significantly on public financial management (PFM) capacity building at the national and provincial level. Major contributors to this report were Tetsuo Miyabara, Assistant Director; Mary Koenen; Bruce Kutnick; Mona Sehgal; and Eddie Uyekawa. Technical assistance was provided by Ashley Alley, Pedro Almoguera, Emily Gupta, Jeffrey Isaacs, Gergana Danailova-Trainor, Karen Deans, Denise Fantone, Etana Finkler, Jacqueline Nowicki, Esther Toledo, and Pierre Toureille.

The United States has allocated over $72 billion to Afghanistan since 2002. With other international donors, it is focused on transitioning leadership to the Afghan government and has pledged to provide at least 50 percent of its development aid through the Afghan government budget. Improving Afghanistan's public financial management capacity is critical to this transition. In 2010, the Afghan government, consulting with donors, issued a Public Financial Management Roadmap (Roadmap), which outlines goals to improve Afghanistan's capacity to develop a national budget and expend funds. 
GAO reviewed (1) U.S. efforts to improve the Afghan government's public financial management capacity, including the extent to which they support Roadmap goals, and (2) the extent to which U.S. efforts have improved the government's capacity. GAO reviewed documents and interviewed officials from the U.S. Agency for International Development (USAID); Departments of State, Defense (DOD), and the Treasury (Treasury); World Bank; and Afghan government in Washington, D.C., and Kabul, Afghanistan. USAID, Treasury, and DOD support the Public Financial Management Roadmap (Roadmap) goals through various activities such as (1) USAID projects that provide technical assistance and training to Afghan civil servants, (2) Treasury advisers' assistance to the Ministry of Finance (MOF), and (3) DOD's Combined Security Transition Command-Afghanistan (CSTC-A) that provides support to the Ministries of Defense (MOD) and Interior (MOI). GAO found that these efforts are aligned with the Roadmap goals. USAID provides training and technical assistance mainly through two contractor-implemented projects. One USAID project provides technical assistance to 37 civilian ministries to develop their annual budgets, while another USAID project provides training in areas such as financial management and procurement to Afghan civil servants. Treasury provides technical assistance through 6 advisers in MOF, who work with senior officials on issues such as budget execution. Through CSTC-A, DOD has 22 advisers at MOD and MOI, who advise officials on developing their budgets and strengthening the payroll system to improve accuracy. The overall extent to which U.S. efforts have improved the public financial management capacity of the Afghan government cannot be fully determined because (1) U.S. agencies have reported mixed results, and (2) weaknesses in USAID's performance management frameworks, such as lack of performance targets and data, prevent reliable assessments of its results. 
USAID's evaluations of its two public financial management projects indicate that some activities were successfully completed, while others were terminated because these activities were not deemed useful. Treasury advisers assessed that although their assistance at MOF had a positive effect, some results had limitations. For example, advisers assessed that their efforts to design reports for improved communication of financial information were not as successful as they had expected. Additionally, CSTC-A assessed that while MOD has made progress since 2008 and can perform critical financial management functions with minimal international support, MOI still needs significant international support for such operations. In early 2010, CSTC-A projected that MOD would transition to needing no coalition support for finance and budget functions by January 2012, and MOI would reach a similar goal by March 2012. However, in early 2011, CSTC-A extended time frames for meeting its benchmarks for MOD and MOI to March 2012 and November 2012, respectively. Regarding deficiencies in USAID's performance management framework, both the USAID Mission performance management plan and project-specific plans lack performance targets as required for each indicator related to public financial management. Additionally, implementing partners, such as contractors, have not consistently reported performance data for all indicators. Moreover, baselines for public financial management capacity of civilian ministries have not yet been established. In the absence of baselines, performance targets, and data, it is difficult to assess the extent to which USAID efforts have increased the public financial management capacity of Afghan ministries. 
GAO recommends that the USAID Administrator take steps to (1) establish performance targets in its Mission Performance Management Plan (PMP); (2) ensure implementing partners' PMPs include baselines and approved targets; and (3) ensure implementing partners routinely report performance data. USAID concurred with GAO recommendations and is taking steps to address them. |
The Dodd-Frank Act, which established CFPB, grants the agency authority to develop rules aimed at protecting consumers in the financial products and services marketplace. In particular, the Dodd-Frank Act requires that CFPB consider the potential benefits and costs to consumers and covered persons, impacts of proposed rules on small banks and credit unions, and impact of proposed rules on consumers in rural areas. The act also requires CFPB to seek input from small entities during the rulemaking process for certain proposed rules. Specifically, when CFPB conducts a rulemaking that it determines will have a significant economic impact on a substantial number of small entities, CFPB must convene a SBREFA panel to seek direct input from small entities before issuing proposed rules for public comment, in addition to its normal rulemaking outreach to the public. This requirement extends beyond the typical rulemaking process, established under the Administrative Procedure Act, which encompasses the publication of most rules in the Federal Register and a period for public comment. The SBREFA panel requirement applies to CFPB, the Environmental Protection Agency, and the Occupational Safety and Health Administration. Once convened, the SBREFA panel has 60 days to solicit input from small entities on a draft proposal of a rule and report this input as well as its findings in a panel report. The Regulatory Flexibility Act, as amended by the Dodd-Frank Act, requires CFPB to consider the input of small entity representatives, which is reflected in the panel report, when drafting its proposed rule. See figure 1 for an overview of CFPB’s rulemaking process and the SBREFA panel process. The SBREFA panels are chaired by CFPB and include other government agency representatives from the Small Business Administration’s (SBA) Office of Advocacy and the Office of Management and Budget’s (OMB) Office of Information and Regulatory Affairs. 
Once CFPB has made the determination to convene a SBREFA panel, it must identify the appropriate number and mix of small entity representatives for the purpose of obtaining advice and recommendations from the individuals about the potential impacts of the proposed rule. These representatives are to be selected from small businesses, not-for-profit organizations, and government jurisdictions. Specifically, the Regulatory Flexibility Act, as amended by the Dodd-Frank Act, requires that CFPB identify these small entity representatives in consultation with the Chief Counsel for Advocacy of SBA. A small business must meet certain statutory definitions and SBA size standards to be eligible to participate in the SBREFA panel process. Before meeting with small entity representatives, CFPB, in collaboration with SBA’s Office of Advocacy and OMB’s Office of Information and Regulatory Affairs, develops an information package to share with small entity representatives. These materials typically contain information about the background and requirements for the proposed rule, an overview of the proposed rule (including a preliminary assessment of the potential impacts), and any alternatives under consideration, as presented by CFPB. CFPB also provides small entity representatives with an agenda and a list of potential discussion topics for the panel meeting, a fact sheet and other verbal and written information about their role in the process, and questions posed to small entity representatives on the impacts of the proposed rules for the meeting. CFPB officials also mentioned that they sometimes observed operations relevant to the rulemaking at a financial services organization. When CFPB convenes a SBREFA panel, the panel must assess certain impacts of proposed rules before their release for public comment as a Notice of Proposed Rulemaking. 
Accordingly, topics of discussion for the panel meeting with small entity representatives address subject areas that CFPB is required to assess in its rulemakings. In particular, CFPB must prepare an Initial Regulatory Flexibility Analysis with the following required elements:
- a description and estimate, where feasible, of the number of small entities to which the proposed rule will apply;
- a description of the projected reporting, recordkeeping, and other compliance requirements of the proposed rule;
- an identification, to the extent practicable, of all relevant federal rules which may duplicate, overlap, or conflict with the proposed rule;
- a description of any significant alternatives to the proposed rule which accomplish the stated objectives of applicable statutes and which minimize any significant economic impact of the proposed rule on small entities;
- a description of any projected increase in the cost of credit for small entities (and, if so, any significant alternatives to the proposed rule that accomplish the stated objectives of applicable statutes and minimize any increase in the cost of credit for small entities); and
- a description of the advice and recommendations of representatives of small entities relating to the issues described above.
Subsequent to the panel meeting with small entity representatives, the panel must prepare a report that summarizes the input of the small entity representatives and recommendations of the panel members. Accordingly, CFPB collaborates with SBA’s Office of Advocacy and OMB’s Office of Information and Regulatory Affairs to prepare the report, which is required to be completed within 60 days after the panel convenes. It is publicly released in conjunction with the release of the proposed rule as a Notice of Proposed Rulemaking in the Federal Register. Once the proposed rules are publicly released, another comment period is open to any interested parties, organizations, and the public in general. 
After the comment process is complete for a proposed rule and CFPB has decided to finalize the rule and continues to find that the proposed rule will have a significant economic impact on a substantial number of small entities, CFPB then must prepare a Final Regulatory Flexibility Analysis in conjunction with the final rule. This final analysis must include the following elements:
- description and estimate of the number of small entities to which the proposed rule will apply or an explanation of why no such estimate is available;
- description of the projected reporting, recordkeeping, and other compliance requirements of the proposed rule;
- description of the steps taken to minimize the significant economic impact on small entities consistent with the stated objectives of applicable statutes, including reasons for selecting the alternative adopted and rejecting other significant alternatives; and
- description of the steps taken to minimize any additional cost of credit for small entities.
The Office of Inspector General (OIG) for the Board of Governors of the Federal Reserve System and the Consumer Financial Protection Bureau has reviewed certain aspects of CFPB’s rulemaking process, including the SBREFA panel process. In its September 2014 report, the OIG concluded that CFPB complied with Section 1100G of the Dodd-Frank Act in its operation of the SBREFA panels and rulemaking processes. The OIG also found that CFPB’s interim policies and procedures had been in use for approximately 2 years without being updated or finalized. The interim policies had afforded CFPB staff significant discretion in their rulemaking approach to regulatory analysis, which contributed to a variance in documentation and inconsistent knowledge transfer practices. At the time of the OIG review, CFPB used interim guidance to detail the agency’s rulemaking process, which included utilizing the SBREFA process as required under the Dodd-Frank Act. 
CFPB used the interim guidance in the development and issuance of the four rules we reviewed, all of which were published in the Federal Register as proposed rules for public comment prior to September 2014. The OIG made recommendations for CFPB to finalize its interim guidance documents and enhance data repository measures. CFPB finalized its internal guidance documents in 2014. As of March 2016, the OIG noted that CFPB had agreed with its recommendations. As of June 2016, OIG staff also told us that corrective actions were underway and that the recommendations remained open while the OIG awaited further documentation from CFPB to close them out. CFPB, in collaboration with SBA’s Office of Advocacy and OMB’s Office of Information and Regulatory Affairs, accomplished required steps for conducting the four panels that we reviewed, including soliciting the input of small entity representatives at panel meetings. In addition, before the panel discussions, CFPB worked with SBA’s Office of Advocacy to identify and select candidates to be small entity representatives. CFPB, SBA’s Office of Advocacy, and OMB’s Office of Information and Regulatory Affairs also worked collaboratively to develop and send materials to small entity representatives before panel meetings. The materials included information on the draft proposed rule (with a preliminary assessment of the potential impacts), discussion questions, and the SBREFA process. Based on our analysis of testimonial and documentary evidence, we found that CFPB collaborated with SBA’s Office of Advocacy in selecting small entity representatives and that different representatives participated in each of the four panels. 
We also found that CFPB worked with trade associations to identify potential candidates for small entity representatives and further vetted organizations to help ensure that a variety of entity types were represented at the panel meeting, including those of varying sizes and from different geographic areas. The time frames afforded to small entity representatives to provide input before panel meetings generally increased over time (see fig. 2). CFPB’s actions occurred within the context of a Regulatory Flexibility Act requirement to complete panel reports within 60 calendar days of convening the panels. CFPB defines “convened” as the date on which CFPB, SBA’s Office of Advocacy, and OMB’s Office of Information and Regulatory Affairs formally established the panel, not the date on which the panel meeting with small entity representatives occurred. For the earlier panels, CFPB convened the panels either shortly before providing representatives with materials or at the same time, and representatives then had 10 to 11 business days to review materials before panel meetings. For the most recently completed panel we reviewed (HMDA), CFPB first sent out materials and allowed 18 business days for representatives to provide input before the panel meeting. During this period, CFPB hosted teleconferences with representatives to help prepare them for the panel meeting, and it then held the meeting 5 business days after convening the panel. CFPB officials said that their process had evolved to extend the time spent on outreach to small entity representatives; specifically, they conducted more outreach with small entity representatives prior to convening the panel for those regulations for which the Dodd-Frank Act did not mandate a specific issuance date (as with the HMDA panel). CFPB officials explained that the agency had less flexibility on the earlier panels because of the time frames the Dodd-Frank Act required for completion of the associated rules.
Similarly, the time allowed for small entity representatives to provide comments after the panel meetings (which were all-day meetings) varied. For instance, for the first panel, which addressed the TILA-RESPA Integrated Disclosure rule, representatives had an additional 5 business days to submit comments after the panel meeting, giving them a total of 16 business days to review materials and provide input. For the HMDA rule, CFPB provided an additional 10 business days after the panel meeting for representatives to provide comments, giving representatives a total of 29 business days to review materials, meet with the panel, and provide input to the rulemaking. CFPB, SBA’s Office of Advocacy, and OMB’s Office of Information and Regulatory Affairs also collaborated to prepare panel reports as required. The panel reports we reviewed summarized the topics addressed during the panel discussions as well as the recommendations from the panels. These topics were consistent with the elements of the Initial Regulatory Flexibility Analysis that rulemakings under the RFA, including those in the SBREFA process, must address. The panel reports also included written comments by small entity representatives in the appendixes of the reports. The agencies generally completed the reports within the deadline of 60 days after the panel was convened. In preparing the panel reports, CFPB and SBA’s Office of Advocacy noted the challenges presented by the 60-day deadline. Within this period, the panel sends materials to small entity representatives with time for review, conducts a formal meeting with the small entity representatives to gather their input, and provides time for the representatives to submit written comments before drafting the panel report. As previously discussed, CFPB considers a panel to be convened on the date it was established, not the date it met.
Therefore, the later the panel meets with the small entity representatives, the more challenging it becomes to prepare the panel report within the deadline. After the panel report is prepared by the three agencies, CFPB makes it publicly available as part of its Notice of Proposed Rulemaking in the Federal Register. For the rulemakings we reviewed, the panel reports were publicly released approximately 2–4 months after the reports were completed and approximately 4–6 months after the panels were convened. CFPB addressed the required elements for Initial and Final Regulatory Flexibility Analyses under the Regulatory Flexibility Act, as amended by the Dodd-Frank Act, as well as more general rulemaking requirements. Based on our review, the Notices of Proposed Rulemaking that accompanied the four proposed rules incorporated the required elements of an Initial Regulatory Flexibility Analysis. Similarly, the Notices of Final Rulemaking associated with these four rules addressed the required elements of the Final Regulatory Flexibility Analysis. Each of the Notices of Proposed Rulemaking also included discussion of the required elements that apply to CFPB rulemakings generally, beyond the Regulatory Flexibility Act requirements. These elements are: the potential benefits and costs to consumers and covered persons; the impacts of proposed rules on depository institutions and credit unions with $10 billion or less in total assets; and the impacts of proposed rules on consumers in rural areas. Although CFPB addressed these general requirements, it acknowledged that the impacts on consumers in rural areas were sometimes not fully known because of limited information. In such cases, CFPB generally stated in the proposed and final rules that it had limited information on such impacts and would continue seeking information and data on them.
Our review of the rulemaking documents indicates that the discussion of the rule proposals and alternatives in the panel reports and the Notices of Proposed Rulemaking focused on reactions to the proposals and alternatives that CFPB presented. In cases in which the documents included alternatives put forth by small entity representatives, those alternatives typically focused on who should be exempt from a proposed rule or how much time should be allowed to implement it. The Regulatory Flexibility Act requires that each CFPB Initial Regulatory Flexibility Analysis include, among other things, a description of any significant alternatives to the proposed rule which accomplish the stated objectives of applicable statutes and which minimize any significant economic impact of the proposed rules on small entities and any increase in the cost of credit for small entities. CFPB makes judgments about which alternatives meet this requirement. The following examples illustrate CFPB’s focus in the rulemaking documents that we reviewed: Panel materials: As described earlier, panel materials include an overview of the proposed rule and any alternatives under consideration, as presented by CFPB. In the panel materials disseminated for each of the four rulemakings, CFPB presented its “Outline of Proposals under Consideration and Alternatives Considered.” Consistent with its internal guidance, CFPB developed the proposals and alternatives presented in these documents to facilitate input from small entity representatives. For example, in the panel materials for the TILA-RESPA Integrated Disclosure rulemaking, CFPB presented alternative prototypes for the “Loan Estimate” and “Settlement Disclosure” documents. According to CFPB, testing of these alternative prototypes with consumers was completed in January 2012, before the SBREFA panel met with representatives in March 2012.
Panel reports: The panel reports generally provided a comprehensive discussion of the different aspects of the proposals and alternatives presented by CFPB. In reviewing the alternatives in the panel reports, we found that the discussions also principally addressed the proposed rules and alternatives presented by CFPB in the panel materials. Accordingly, the majority of comments from representatives in the panel reports were focused on their reactions to CFPB’s proposed rules and alternatives. Furthermore, some of the comments reflected discussions over which entities should be exempt from the proposed rules and the time frames to implement them. For instance, in the HMDA rulemaking, representatives conveyed their positions on the appropriate threshold for exemption from certain reporting requirements. CFPB officials said that it was not often the case that small entity representatives offered a large volume of alternatives. The SBREFA panel republishes the small entity representatives’ written comments in an appendix of the panel report. In some cases, panel reports include summaries of small entity representatives’ ideas for alternatives to CFPB’s proposed rules; that is, the discussion ranged beyond numeric exemption thresholds and implementation time frames. For example, in the report on Mortgage Loan Originator Compensation, small entity representatives asserted that the economic costs of origination vary with the loan balance and that a flat loan origination fee was therefore unsuitable. Proposed rules: The Notices of Proposed Rulemaking contained discussion and assessment of CFPB’s proposed alternatives. We found that some alternatives posed by representatives as part of the SBREFA panel process were discussed and assessed in the proposed rule and others were not. For example, for the proposed rule on Mortgage Loan Originator Compensation, CFPB discussed an alternative proposal from small entity representatives in the section of the proposed rule discussing significant alternatives.
We also observed that CFPB often used the significant alternatives section of the Notice of Proposed Rulemaking to further elaborate on its own proposed rules. In an example of an alternative not presented in the significant alternatives section of the proposed rule, small entity representatives for the HMDA rulemaking sought an alternative for “CFPB to limit the addition of data points to those mandated by the Dodd-Frank Act and only as necessary to meet the HMDA purposes” to address concerns about the burdens and costs associated with new data points, particularly those not specifically enumerated in the Dodd-Frank Act. In its Notice of Proposed Rulemaking, CFPB acknowledged concerns about the proposals to add new data points to the HMDA reporting requirements but did not explicitly present the alternative offered by small entity representatives to limit the addition of new data points. During the comment period for this proposed rule, SBA’s Office of Advocacy commented on this alternative, emphasizing that these data points were not statutorily required, and urged CFPB to exempt small entities from collecting such data points until CFPB had the opportunity to determine whether the additional information furthered the goals of HMDA. As required by the RFA, CFPB responded to the comments of the Office of Advocacy, including its comment on discretionary data points, in its Final Regulatory Flexibility Analysis, which was published in the final HMDA rule. Other interested parties can identify or comment on alternatives as part of the proposed rule comment period. CFPB officials emphasized that they solicited alternatives from small entity representatives but were not required to list in the Notice of Proposed Rulemaking all alternatives offered by the small entity representatives; they only had to include those that they deemed significant and consistent with the stated objectives of applicable statutes.
The officials also noted that the data needed to make a fuller assessment of some alternatives from small entities were not always available. In developing the panel materials, CFPB officials stated that the agency sought to balance developing enough information for consideration before the panel convened against providing information so complete that it would appear the agency had already reached a conclusion on the content of the rule. As mentioned earlier, CFPB and SBA’s Office of Advocacy officials emphasized the challenge of completing the panel report within 60 days of convening the SBREFA panel. At the same time, CFPB officials stated that the public, including trade associations and small entities themselves, had opportunities to comment and provide additional information on the proposed rule during the public comment period after the SBREFA process was completed. Small entity representatives’ views on the panel process were generally positive, but they also suggested areas for improvement. When asked their overall views on the SBREFA process (question 29 of the structured interview, see app. II), 25 of 57 representatives we interviewed said the SBREFA process was good, 20 stated that they were glad to have served as small entity representatives, and 18 said the process was a good opportunity to be heard. For instance, 1 representative said her voice was heard and she would participate on another SBREFA panel if asked in the future. Another said the SBREFA panel was a learning process for CFPB and he would “jump at the opportunity” to be a part of it again. Conversely, 13 of the 57 representatives stated that they felt CFPB treated the process as a formality. For example, 1 said CFPB was good at following processes but felt that it did not listen to input. He added that he felt CFPB’s mind was made up before the panel took place.
Another felt the panel was more symbolic than meaningful, saying that input from small entity representatives was not reflected in the rule and that representatives were not given a valid role in the rulemaking. Furthermore, 7 representatives felt the process was hindered by CFPB’s lack of knowledge of their industry. For example, one said CFPB staff did not have enough practical experience and that during the panel meeting there was limited time to talk about the actual rule because small entity representatives had to explain certain banking processes to CFPB. Finally, 15 representatives felt the SBREFA process could be improved. For instance, one said she would have liked more detailed discussion, with live beta or mock-up testing with her operational people so they could test some of the things CFPB proposed; she believed this would have produced a better rule. Another said there should have been a second panel meeting, after publishing the proposed rule, to discuss the topics again so that representatives could better evaluate the rule. This representative further noted that with two panels, CFPB could use the first panel to close its knowledge gaps about how industries operate their businesses. As discussed previously, CFPB conducted outreach efforts to prepare small entity representatives to provide constructive input during the panel meeting, and the efforts varied by panel (see table 1). For all panels, CFPB provided the representatives with materials that included the draft proposal for rulemaking, discussion questions, and a fact sheet describing the SBREFA process. For the panels on the TILA-RESPA Integrated Disclosure and Mortgage Servicing rules, CFPB also provided representatives with a draft agenda for the panel meeting and the opportunity to comment on it. For the HMDA panel only, CFPB conducted two sets of teleconferences before the panel meeting to discuss elements of the draft proposal for rulemaking.
Of the 57 small entity representatives we interviewed, 31 believed CFPB’s outreach efforts prepared them to provide constructive input during the SBREFA panel meeting (question 6), and 15 said CFPB’s efforts partially prepared them (see table 2). The HMDA panel had the greatest share of representatives who said CFPB’s outreach efforts prepared them to provide constructive input, and the Mortgage Loan Originator Compensation panel had the smallest share. Seven representatives from the HMDA panel specifically mentioned that the teleconferences were helpful when asked about CFPB’s outreach efforts. Furthermore, when asked how CFPB’s outreach efforts were constructive or not constructive (question 7), 12 small entity representatives stated they needed more time to prepare for the panel. For example, representatives reported not having enough time to prepare responses to the information CFPB requested; not having enough time to reach out to other businesses and suppliers to gauge the proposal’s impacts; and not having enough time to perform their day-to-day duties at their companies while preparing for the panel. CFPB officials stated that the role of small entity representatives does not include reaching out to other businesses or entities. As discussed previously, the time CFPB had available for outreach in the first three panels was constrained by the deadlines imposed by the Dodd-Frank Act to promulgate implementing regulations. When asked how CFPB’s outreach could be improved (question 8), 10 of 57 small entity representatives suggested CFPB obtain more knowledge of industry practices before convening the panels. For example, one representative believed CFPB was surprised by the answers representatives provided to its questions because the agency lacked real-world experience; the representative suggested CFPB do site visits with typical small entities to become better informed.
Another representative said CFPB did not know which entities regulated the representative’s business or its reporting requirements, and that CFPB did not understand the day-to-day operations of that line of business. CFPB officials stated that they were diligent in their desire to understand business practices and focused their attention on these areas during outreach. In addition, 7 of 57 said CFPB could provide better guidance about the role of small entity representatives in the process or what to expect at the panel meeting. For example, one representative believed some comments from small entity representatives at the panel meeting were not helpful because expectations, including the type of input CFPB was seeking, were not set. As discussed previously, CFPB provides written and verbal guidance to representatives on what to expect in the panel and their role in the SBREFA process. As part of its outreach efforts, CFPB provided materials to the small entity representatives, including a draft proposal of the rule (with a preliminary assessment of the potential impacts), discussion questions for the panel meeting, and a fact sheet on the SBREFA process. Of the 57 small entity representatives we interviewed, 44 stated the CFPB materials prepared them to provide constructive input at the panel meeting (question 11). Also, 33 stated CFPB provided these materials with sufficient time for review before the panel (question 10). Although most representatives responded favorably regarding the materials and their usefulness, 17 commented that the materials omitted some information or guidance that could have better prepared them to participate in the panel meeting (question 12). For example, one small entity representative suggested CFPB include examples of the types of impacts for which it was looking, and another suggested CFPB include how it calculated cost increases.
Although 38 of 57 small entity representatives stated CFPB had selected participants who represented their respective industries (question 14), most small entity representatives on the Mortgage Loan Originator Compensation panel did not believe their industry was well represented (see table 3 below). This sentiment was consistent across the industry representatives we interviewed. For this panel, the mortgage broker industry had the most representatives (7 of 17) of the five industries represented (the other four groups were commercial banks, credit unions, mortgage companies, and nonprofit housing organizations). However, several mortgage brokers believed their industry was not well represented. For example, one mortgage broker said his industry should have had more representation because of the effects the rule has had on mortgage brokers. In contrast, representatives from another industry said their industry was not well represented because so many mortgage brokers were on the panel. As discussed previously, CFPB works with SBA’s Office of Advocacy to determine the appropriate number of panelists to represent each industry. In our interviews, we asked the 57 small entity representatives if enough time had been provided during the meetings of the SBREFA panels to collect their advice and recommendations on the following topics (question 15):
a. Applicability. A description and estimate of the number of small entities to which the proposed rule will apply.
b. Compliance. A description of the projected reporting, recordkeeping and other compliance requirements of the proposed rule, including an estimate of the classes of small entities which will be subject to the requirement and the type of professional skills necessary for preparation of the report or record.
c. Conflicting rules. An identification of all relevant federal rules which may duplicate, overlap or conflict with the proposed rule.
d. Alternatives.
Significant alternatives to the proposed rule that minimize any significant economic impact on small entities and that minimize increases in the cost of credit for small entities.
e. Cost of credit. Any projected increase in the cost of credit for small entities.
f. Other topics. Any additional topics related to the rulemaking.
Representatives had varying recall about the specific topics of the panel discussions and their views about them (the panel meetings were held 2–4 years before our interviews). Therefore, their responses should be viewed with caution. More representatives were able to recall discussions about compliance (question 15b) and alternatives (question 15d). Of the 57 small entity representatives, 38 mentioned that there was not sufficient time for discussion of at least one of the six topics at panel meetings, compared with 2 who said there was time to discuss all topics. Figure 3 illustrates the small entity representatives’ responses. Furthermore, 19 stated that at least one topic important to their business or industry was not discussed during the panel meeting (question 16). For example, one representative noted that the panel did not discuss how the rule would affect costs for consumers. CFPB officials stated that at the end of each panel meeting representatives were given a final opportunity to bring up any issues they believed had not been focused on or given enough attention. When asked how CFPB’s conduct of the panel meetings could be improved (question 17), 19 representatives suggested more time or additional meetings would improve the process. A majority of small entity representatives that we interviewed said that CFPB at least partially reported their views accurately (question 19) and appeared to at least partially take them into consideration during rulemaking (question 21). Of the 57 small entity representatives we interviewed, 25 stated their views were accurately characterized in the panel report.
For instance, one said that she thought CFPB represented her views verbatim. Another stated she was glad to see her written comments included in the report. Twelve representatives stated the characterization of their views in the panel report was partially accurate. For instance, some representatives (5 of 12) said CFPB did not characterize some of their views with proper detail or tone. Six representatives stated their views were not accurately characterized in the panel report. For instance, one of the six said she felt CFPB wrote the report before the panel took place. Fourteen representatives did not remember seeing the panel report or did not recall how their views were characterized in it. As discussed previously, the representatives’ written comments were published in the appendixes of the panel reports. Most of the 57 representatives felt CFPB at least partially considered their views, concerns, and suggestions in its rulemaking (question 21). Seventeen of 57 representatives believed CFPB considered their views, concerns, and suggestions. One of the 17 said he believed that CFPB considered his views but did not implement them in the rulemaking. Another said CFPB took all of the representatives’ concerns seriously, listened, and considered input where it had the latitude to do so, but he recognized CFPB had to implement specific requirements of the Dodd-Frank Act. Nineteen representatives stated CFPB partially considered their views in its rulemaking. One of the 19 said CFPB appeared to have heard the representatives and taken some things into consideration, but he felt CFPB “was on a mission and knew how they wanted the rule to be.” Another believed CFPB tried to understand her concerns, but the final rule did not reflect the depth of her concerns; she noted some of this was due to requirements in the Dodd-Frank Act. Fifteen said CFPB did not appear to consider their views in its rulemaking.
One representative said she felt defeated when she saw what came out after the panel; she thought CFPB was really listening but did not address any of the major areas of concern raised during the panel. Another thought CFPB officials already had their minds made up as to what should be in the rule. As discussed previously, CFPB uses the panel report to inform its proposed rule. When asked if CFPB amended its proposed rule based on comments the representative made during the panel or in writing (question 23), 32 believed the agency did. Although most small entity representatives felt their views were at least partially considered in the rulemakings and most felt the agency amended its final rule based on their comments, most representatives expressed disagreement with CFPB’s final rules for reasons such as the increased cost of compliance. Specifically, only 7 of the 57 stated they were satisfied with the final rules (question 27). CFPB officials noted that the rules for which GAO reviewed SBREFA panels were based on statutory requirements in the Dodd-Frank Act, all involving issues related to mortgage lending. In its rulemaking process, CFPB is to consider input from multiple sources and make the judgments deemed necessary to accomplish the stated objectives of applicable statutes. We provided a draft of this report to CFPB, SBA’s Office of Advocacy, and OMB for their review and comment. In its written comments (reproduced in app. III), CFPB generally agreed with our findings. CFPB, SBA’s Office of Advocacy, and OMB’s Office of Information and Regulatory Affairs also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, CFPB, SBA’s Office of Advocacy, OMB, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or ShearW@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. This report addresses (1) the extent to which the Consumer Financial Protection Bureau (CFPB) solicited, considered, and incorporated small entity inputs into its rulemakings; and (2) the views of the small entity representatives on CFPB’s rulemaking process. To assess the extent to which CFPB solicited, considered, and incorporated small entity inputs into rulemakings, we interviewed and gathered information from CFPB, the Office of Advocacy at the Small Business Administration, and the Office of Information and Regulatory Affairs of the Office of Management and Budget about the process for Small Business Regulatory Enforcement Fairness Act (SBREFA) panels. We also contacted trade associations and industry participants to gain their perspectives on the SBREFA panel process. This work did not encompass any computer-generated data from agencies requiring a data reliability assessment. We also reviewed applicable laws, regulations, and guidance governing requirements for CFPB’s rulemaking process involving small business interests. We compared these requirements with CFPB’s analyses of the impacts of proposed rules on small businesses for four rulemakings. Specifically, we analyzed CFPB’s rulemaking processes and documents related to all rulemakings for which SBREFA panels were convened and final rules were issued. As of April 2016, four CFPB rulemaking efforts involving SBREFA panels had resulted in final rules.
These rulemakings were focused on mortgage lending and included rules associated with the Truth in Lending Act and Real Estate Settlement Procedures Act (TILA-RESPA) Integrated Disclosure, Mortgage Servicing (for TILA and RESPA), Mortgage Loan Originator Compensation, and the Home Mortgage Disclosure Act. To obtain the views of small entity representatives, we conducted semi-structured interviews with small entity representatives who participated in the four panels to gain a general understanding of their insights on the SBREFA process. We developed a structured interview guide to inquire about seven stages of the process: (1) notification of rulemaking, (2) outreach to small entity representatives, (3) materials to review, (4) the panel meeting, (5) CFPB consideration of small entity representatives’ comments, (6) proposed rulemaking, and (7) final rule. We asked direct questions for each stage as well as open-ended questions about activities in the stages or for any additional comments. We contacted all 69 small entity representatives who participated in the four panels and completed interviews with 57 (an 83 percent response rate). Of those we did not interview, 6 declined our interview request and 6 were unavailable for an interview during our audit time frame. Table 4 below shows how many small entity representatives we interviewed by panel. A copy of the structured interview questions and the results of the close-ended questions are included in appendix II. This work did not encompass any computer-generated data requiring a data reliability assessment. We conducted the interviews by telephone from February 10, 2016, to March 28, 2016. Each interview was conducted by a team of at least two analysts. To verify the information collected during the interviews, we reviewed the narrative and close-ended responses for consistency and reached consensus among the analysts who conducted the interviews on the content of the interview data.
The structured interviews contained a mixture of close-ended and open-ended questions. For most questions, including close-ended questions, the small entity representatives responded with a narrative answer. Based on insights from conducting the interviews and the relative importance of each question, we assigned each question to one of three categories of content analysis: (1) none, (2) Tier 1, or (3) Tier 2. None. Questions assigned no content analysis were lower-priority questions from which we deemed that little information of value would be gleaned through content analysis. Tier 1. For this analysis, two analysts independently reviewed all the small entity representatives’ responses to the question, developed their own categories of responses, and coded the responses to those categories. Then the analysts reconciled differences in their categories and coding to reach consensus. Questions we assigned Tier 1 analysis were the highest-priority questions (we deemed that information of high value would be gleaned from conducting content analysis). Tier 2. For this analysis, one analyst reviewed all the small entity representatives’ responses to the question, developed categories of responses, and coded the responses to those categories. Then another analyst reviewed the categories and coding for logic and errors. Questions we assigned Tier 2 analysis were priority questions (we deemed that information of value would be gleaned from conducting content analysis). We conducted this performance audit from October 2015 through August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
We provided the following interview questions and preambles to the small entity representatives prior to our interviews. At the start of each interview, we reminded the representatives that the interview was intended to collect their opinions on the SBREFA process. We also recognized that the panels had taken place several years earlier and encouraged the representatives to answer to the best of their ability.

For rulemakings that require the Consumer Financial Protection Bureau (CFPB) to convene a Small Business Regulatory Enforcement Fairness Act (SBREFA) panel, CFPB must assure that small entities have an opportunity to participate in the rulemaking through reasonable techniques.

1. When did you first hear CFPB was considering rulemaking on ?
2. CFPB uses several techniques to notify small entities of its rulemaking. Which of the following techniques notified you of this particular rulemaking? Direct notification (such as a phone call or email)

CFPB Outreach to Small Entity Representatives

The SBREFA panel is responsible for collecting advice and recommendations from each small entity representative on issues related to CFPB's proposed rule. However, the quality of advice and recommendations depends on how prepared small entity representatives are to participate in the panel process. CFPB's timing and method of outreach to you and the other small entity representatives prior to your meeting with the SBREFA panel is intended to foster a more thoughtful rulemaking process.

3. When were you first asked to participate on the CFPB SBREFA panel?
4. How were you asked to participate?
5. What outreach did CFPB make to you prior to its SBREFA panel?
6. Did CFPB's outreach efforts prepare you to provide constructive input during the SBREFA panel meeting?
7. How were CFPB's outreach efforts constructive, or not constructive?
8. Do you have suggestions on how CFPB's outreach could be improved?
Prior to your meeting with the SBREFA panel, CFPB provided you with (1) the draft proposal, (2) discussion questions, and (3) a fact sheet on the SBREFA process.

9. Did CFPB provide any additional materials prior to your meeting with the SBREFA panel other than the three listed above?
10. From your perspective, did CFPB provide the materials with time for sufficient review prior to the SBREFA panel meeting?
11. Did the materials provided by CFPB prepare you to provide constructive input during the panel?
12. Was anything not included in the materials that could have better prepared you to participate in the panel?
13. Do you have any other comments about the meeting materials?

The SBREFA panel convened a meeting in the Washington, D.C. area with you and the other small entity representatives to collect your advice and recommendations on CFPB's proposed rule. The following questions relate to your participation on and observation of that meeting.

14. Was your industry represented among the small entity representatives who participated in the panel?
15. Did the SBREFA panel provide time to collect your advice and recommendations on each of the following:
a. A description and estimate of the number of small entities to which the proposed rule will apply.
b. A description of the projected reporting, recordkeeping and other compliance requirements of the proposed rule, including an estimate of the classes of small entities which will be subject to the requirement and the type of professional skills necessary for preparation of the report or record.
c. An identification of all relevant federal rules which may duplicate, overlap or conflict with the proposed rule.
d. Significant alternatives to the proposed rule which minimize any significant economic impact on small entities.
e. Any projected increase in the cost of credit for small entities.
f. Any additional topics related to the rulemaking.
16.
Was there a topic or topics that are important to your business or industry that was not discussed during the panel?
17. Do you have suggestions on how CFPB's conduct of this panel could have been improved?
18. Do you have any other comments about the panel meeting?

CFPB's Consideration of Your Comments

The SBREFA panel is required to summarize the comments voiced by small entity representatives in its panel report. Additionally, small entity representatives may submit written comments to the SBREFA panel to be included in the report. CFPB must consider the SBREFA panel report as it develops its proposed rulemaking.

19. Were your views, concerns, and suggestions accurately characterized by CFPB in the SBREFA panel report?
20. Did you submit written comments to the SBREFA panel?
21. Did CFPB consider your voiced and written views, concerns, and suggestions in its rulemaking?
22. Do you have any other comments about how the panel considered your comments?

CFPB's Proposed Rulemaking

When CFPB published its proposed rulemaking in the Federal Register, which included the SBREFA panel report, it initiated a public notice and comment period where small entities could submit comments based on the proposed rule.

23. Did CFPB amend its proposed rule based on comments you made during the panel or in writing?
24. Were you satisfied with the proposed rule CFPB published in the Federal Register?
25. Did you submit written comments on the proposed rule during the notice and comment period?

CFPB's Final Rule

After CFPB considers the comments submitted during the notice and comment period, it can publish a final rule to the Federal Register.

26. Did CFPB amend its final rule based on comments you made during the proposed rule's notice and comment period?
27. Were you satisfied with the final rule?
29.
In view of the questions we have asked and your responses, is there anything else you would like us to know about the SBREFA process for this CFPB panel or any comment you have about serving as a small entity representative?

In addition to the contact named above, Debra Johnson (Assistant Director), Andrew Pauline (Assistant Director), Barry Kirby (Analyst in Charge), Timothy Bober, Alyssia Borsella, Emily Chalmers, William Chatlos, Tiffani Humble, Davis Judson, Anne Kruse, John McGrail, Marc Molino, Alexandra Martin-Arseneau, Barbara Roesmann, and Elizabeth Wood made key contributions to this report.

The Regulatory Flexibility Act, which was amended by the Dodd-Frank Act, requires CFPB to convene Small Business Review Panels (also known as SBREFA panels) for rulemaking efforts that are expected to have a significant economic impact on a substantial number of small entities. These panels are intended to seek direct input early in the rulemaking process from small entities (which can include small businesses, small not-for-profit organizations, and small governmental jurisdictions) that would be impacted by CFPB's rulemakings. This report addresses the extent to which CFPB solicited, considered, and incorporated such inputs into its rulemakings, and the views of small entity representatives on CFPB's rulemaking process. GAO analyzed and reviewed CFPB's rulemaking processes and documents and conducted semi-structured interviews with 57 of the 69 participants on four panels who agreed to be interviewed. The scope was limited to the four SBREFA panels that had associated final rules as of April 2016. GAO does not make any recommendations in this report. CFPB generally agreed with our findings.
The Consumer Financial Protection Bureau (CFPB) has taken steps to solicit, consider, and incorporate inputs from small entities into its rulemaking process, as required by the Regulatory Flexibility Act, as amended by the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act). GAO reviewed documents from the four Small Business Regulatory Enforcement Fairness Act (SBREFA) panels that resulted in final rulemaking as of April 2016 and found CFPB completed required steps for conducting them (see fig.). CFPB addressed required elements for regulatory analyses that are components of the proposed and final rules. Based on a review of selected rules, GAO observed that the discussion of rule proposals and alternatives focused on reactions to proposals and alternatives CFPB presented. Some alternatives that small entity representatives raised at panels were discussed in a significant alternatives section of the proposed rules, while others were not. CFPB officials noted that data needed to make a fuller assessment of some alternatives from small entities were not always available. CFPB officials, consistent with statutory requirements for CFPB rulemakings, also said alternatives that CFPB presents for a panel discussion and in a proposed rule are those they deemed significant and consistent with applicable statutes. GAO interviewed 57 of the 69 small entity representatives who participated in the four SBREFA panels GAO reviewed and found they generally believed the process was useful but also that it could be improved. More than three-quarters stated the materials CFPB provided helped prepare them to provide constructive input, and two-thirds stated their industry was represented on the panels. However, two-thirds stated not enough time was allotted to discuss at least one of the topics on the panel agenda and a third suggested more time or additional meetings would improve the process. 
While 36 of 57 stated CFPB at least partially considered their comments in its rulemakings, most representatives expressed disagreement with CFPB's final rules for reasons such as increased cost of compliance. Specifically, 7 of 57 were satisfied with CFPB's final rules. CFPB officials noted that the rules for which GAO reviewed SBREFA panels were based on statutory requirements in the Dodd-Frank Act. In its rulemaking process, CFPB is to consider input from multiple sources and makes judgments deemed necessary to accomplish the stated objectives of applicable statutes.
Cigarettes continue to dominate the smoking tobacco product market, accounting for approximately 91 percent of sales in 2011. However, the use of other smoking tobacco products has increased over the past 10 years. Between 2001 and 2011, combined sales of roll-your-own tobacco, pipe tobacco, and small and large cigars grew from 3 percent of the smoking tobacco market to 9 percent. Although cigarette use in the United States is declining, the decline is partially offset by growing use of other smoking tobacco products. (See app. II for data on U.S. sales of smoking tobacco products from fiscal year 2001 through fiscal year 2011.) Increasing the price of tobacco products by raising excise taxes is widely recognized as an effective policy for reducing smoking prevalence across socioeconomic and racial groups. Public health and economic studies have found that adolescents are more responsive than adults to tobacco tax and price increases because they have less disposable income. However, the impact of tax increases on reducing overall smoking prevalence is likely to be weaker if smokers can turn to tobacco products that can be used as functional equivalents of factory-made cigarettes and cost significantly less, according to public health officials and academics. Smoking tobacco products are broadly defined in the IRC. Roll-your-own tobacco and pipe tobacco are defined by such factors as the use for which each product is suited and how it is offered for sale, as indicated by its appearance, type, packaging, and labeling. Cigars are differentiated from cigarettes by their wrapper and whether the product is, for a number of reasons, likely to be offered to, or purchased by, consumers as a cigarette. The tax rate for cigars is categorized into small and large cigars, which are differentiated by a weight threshold alone: small cigars are defined as weighing 3 pounds or less per thousand sticks.
The definitions found in the IRC characterize five types of smoking tobacco products that are relevant to our discussion, as shown in table 1. Figure 1 shows a sample of different cigarette and cigar products. Several of the products closely resemble each other in size and shape. The three on the left are cigarettes. The first is a roll-your-own cigarette made by hand with roll-your-own tobacco. The second is a roll-your-own cigarette made in a commercial roll-your-own machine with pipe tobacco. The third is a factory-made cigarette. The three products on the right are cigars, which can vary widely in size, shape, flavor, and aroma. According to industry representatives, a nongovernmental organization, and government officials, cigars traditionally are hand-rolled, wrapped in a tobacco leaf, large in size, and their smoke is not meant to be inhaled. However, they indicated that many small and large cigars now have filters, are wrapped in a type of paper made with tobacco, and can be similar in size and appearance to cigarettes. While the enactment of CHIPRA in 2009 represents the most recent increase in federal excise taxes on tobacco products, Congress has taxed tobacco products since its inception as a means to raise revenue. Of the smoking tobacco products that we discuss in this report, Congress taxed only cigarettes, small cigars, and large cigars prior to 1989. Congress began taxing pipe tobacco on January 1, 1989, and roll-your-own tobacco on January 1, 2000. As the dangers of tobacco became better known, congressional debates surrounding tobacco taxes expanded from raising revenue to protecting the public from the health risks of tobacco. Figure 2 shows the tax rates for four smoking tobacco products from 1951 to 2010. The federal excise tax rates on different tobacco products are calculated in different ways. Cigarettes and small cigars are taxed on a unit basis (number of sticks). Roll-your-own and pipe tobacco are taxed by weight.
Table 2 provides information on the different federal excise tax rates for cigarettes, roll-your-own tobacco, pipe tobacco, and small cigars before and after CHIPRA. Before CHIPRA, the federal excise tax rate on cigarettes was higher than the rates on roll-your-own tobacco, pipe tobacco, and small cigars. However, CHIPRA significantly raised the tax rates on these four products and equalized the rates on cigarettes, roll-your-own tobacco, and small cigars (see fig. 3). Congress equalized the tax rates on roll-your-own tobacco and small cigars with the cigarette tax rate in part in response to concerns that smokers had been using these two products as substitutes for higher-taxed factory-made cigarettes, according to nongovernmental organizations. CHIPRA also raised the federal excise tax rate on pipe tobacco, but to a rate that is considerably lower. Prior to CHIPRA, the tax rates on roll-your-own tobacco and pipe tobacco were the same. CHIPRA significantly changed the federal excise tax rate on large cigars. Large cigars are unique among tobacco products in that the tax rate is ad valorem—a percentage of the manufacturer's or importer's sale price per thousand sticks—up to a maximum tax per thousand sticks. Before CHIPRA, large cigars were taxed at 20.72 percent of the manufacturer's or importer's sale price up to a maximum tax of $48.75 per thousand sticks. After CHIPRA, the ad valorem rate increased to 52.75 percent of the manufacturer's or importer's sale price, and the maximum tax per thousand sticks increased to $402.60 (see table 3). According to an industry association, the retail prices of premium handmade large cigars range from $3 to $20. A public health organization noted that smaller factory-made cigars that meet the legal definition of a large cigar can cost as little as $0.07 per cigar. Figure 4 illustrates the tax structure for large cigars before and after CHIPRA and includes three different scenarios.
The sloped line represents the ad valorem rate, which becomes flat when it reaches the maximum tax per thousand cigars. The following are examples of the federal excise taxes manufacturers and importers would have to pay for differently priced large cigars, before and after CHIPRA (see examples corresponding with fig. 4): A. If the manufacturer’s or importer’s sale price per thousand large cigars is $100, before CHIPRA the ad valorem tax rate was $20.72 per thousand; after CHIPRA it became $52.75 per thousand. B. If the manufacturer’s or importer’s sale price per thousand large cigars is $500, before CHIPRA the tax rate was the maximum tax of $48.75 per thousand; after CHIPRA it became $263.75 per thousand based on the new ad valorem tax rate. C. If the manufacturer’s or importer’s sale price per thousand large cigars is $800, before CHIPRA the tax rate was the maximum tax of $48.75 per thousand; after CHIPRA it became $402.60, which is the new maximum tax rate per thousand. Treasury is responsible for administering and collecting the federal excise tax on all tobacco products, among other things. In general, federal excise taxes are collected when tobacco products leave the domestic factory or, in the case of imports, when the products are released from customs custody. Tobacco manufacturers and importers are required to obtain a Treasury permit to operate and must comply with Treasury’s recordkeeping, reporting, and other requirements. Tobacco product wholesalers and distributors are responsible for paying state and local excise taxes, but they are not required to obtain a Treasury permit and are not subject to Treasury recordkeeping requirements. Figure 5 shows the major steps in the tobacco supply chain, including the key points at which taxes are paid. 
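The capped ad valorem structure behind examples A through C reduces to a single formula: the tax per thousand sticks is the lesser of the ad valorem rate times the sale price and the statutory maximum. The following minimal sketch reproduces the three scenarios (the function and variable names are ours; the rates and caps are those cited above):

```python
def large_cigar_tax(price_per_thousand, ad_valorem_rate, max_tax):
    """Federal excise tax per thousand large cigars: the ad valorem rate
    applied to the manufacturer's or importer's sale price, capped at the
    statutory maximum per thousand sticks."""
    return min(ad_valorem_rate * price_per_thousand, max_tax)

PRE_CHIPRA = (0.2072, 48.75)    # 20.72 percent, $48.75 cap per thousand
POST_CHIPRA = (0.5275, 402.60)  # 52.75 percent, $402.60 cap per thousand

# Reproduces scenarios A, B, and C from figure 4.
for price in (100, 500, 800):
    pre = large_cigar_tax(price, *PRE_CHIPRA)
    post = large_cigar_tax(price, *POST_CHIPRA)
    print(f"${price} per thousand: pre-CHIPRA ${pre:.2f}, post-CHIPRA ${post:.2f}")
```

Note that in the $500 scenario the pre-CHIPRA tax hits the $48.75 cap while the post-CHIPRA tax ($263.75) does not, which is why the tax gap widens most for mid-priced cigars.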
In the Tobacco Control Act passed in June 2009, Congress amended the Food, Drug, and Cosmetic Act by inserting a chapter governing tobacco products and granting FDA authority to regulate the manufacture, distribution, and marketing of tobacco products under that chapter. The act aims to, among other things, reduce the use of tobacco products to decrease health risks and social costs associated with tobacco-related diseases. It recognizes that virtually all new users of tobacco products are adolescents under the age of 18. According to the law, FDA's regulation of tobacco products is based, in part, on a public health standard rather than the safety and effectiveness standard by which FDA regulates pharmaceutical drugs and medical devices. For example, FDA can issue restrictions on the sale, distribution, advertising, and promotion of a tobacco product, if the public health standard is met. This standard requires FDA to demonstrate that the proposed regulation is appropriate for the protection of public health, based on a consideration of the risks and benefits to the population as a whole, including users and nonusers of tobacco products. The act specifies that FDA's authority over tobacco products under Chapter IX of the Food, Drug, and Cosmetic Act shall apply to cigarettes, roll-your-own tobacco, cigarette tobacco, and smokeless tobacco, as well as any other tobacco products that the agency deems by regulation to be subject to such authority. FDA does not at present regulate pipe tobacco or small and large cigars. To implement the Tobacco Control Act, FDA has established the Center for Tobacco Products. Large federal excise tax disparities among tobacco products resulting from CHIPRA caused sizable market shifts from higher- to lower-taxed products.
According to our analysis and interviews with knowledgeable sources, the tax disparities created incentives for price-sensitive manufacturers and consumers to substitute higher-taxed products with lower-taxed products. The market for roll-your-own tobacco shifted to pipe tobacco and the growth rate of the combined market increased after CHIPRA. Roll-your-own tobacco manufacturers shifted to pipe tobacco with minimal, if any, changes to the products, and consumers substituted pipe tobacco for use in roll-your-own cigarettes. At the same time, the cigar market shifted from small to large cigars, and the combined cigar market continued to grow after CHIPRA. Market trends for roll-your-own and pipe tobacco changed immediately after CHIPRA, with sales of pipe tobacco rising steeply while sales of roll-your-own tobacco plummeted. According to government officials and representatives of industry and nongovernmental organizations, manufacturers and consumers switched to lower-taxed pipe tobacco to make roll-your-own cigarettes. After CHIPRA, the federal excise tax on roll-your-own tobacco was over $20 per pound more than the tax on pipe tobacco, whereas before CHIPRA, the taxes on both products were the same. Figure 6 shows the market shift through monthly sales of roll-your-own and pipe tobacco from fiscal year 2001 through fiscal year 2011. Total annual sales of pipe tobacco grew from approximately 3.2 million pounds in fiscal year 2008, the last year before CHIPRA, to 30.5 million pounds in fiscal year 2011, representing an increase of about 869 percent. Over the same period, total annual sales of roll-your-own tobacco declined from approximately 19.7 million pounds to 5.2 million pounds, a decrease of about 74 percent.
According to the representatives of industry and nongovernmental organizations we interviewed, the shift can be mostly attributed to consumers switching from using roll-your-own tobacco to pipe tobacco in roll-your-own cigarettes, rather than to a sudden increase in pipe smoking. CHIPRA's increase in the federal excise tax for roll-your-own tobacco did not dampen the overall sales of roll-your-own and pipe tobacco. Instead, the combined sales of roll-your-own and pipe tobacco increased because of the rapid growth in pipe tobacco sales following CHIPRA. Before CHIPRA, from October 2000 through March 2009, the combined average monthly growth rate was 0.63 percent; after CHIPRA, the combined average monthly growth rate increased to 2.00 percent. See figure 7 for the trends in combined sales of roll-your-own and pipe tobacco from fiscal year 2001 through fiscal year 2011. According to government officials, representatives of nongovernmental organizations, and industry, after CHIPRA many manufacturers of roll-your-own tobacco switched to producing pipe tobacco in order to avoid higher taxes. According to these representatives and government officials, the new pipe tobacco products have minimal, if any, differences from roll-your-own tobacco. Roll-your-own tobacco and pipe tobacco are defined in the IRC by such factors as the use for which the product is suited and how they are offered for sale, as indicated by their appearance, type, packaging, and labeling. To meet the definition of pipe tobacco in the IRC and Treasury's regulations, a product must be clearly labeled as pipe tobacco and not indicate other uses. The definitions of tobacco products in the IRC do not specify physical characteristics that would differentiate pipe tobacco from roll-your-own tobacco.
Representatives of industry and nongovernmental organizations provided examples of current pipe tobacco brands that had been roll-your-own brands prior to CHIPRA, with minimal differences in the packaging and the appearance of the tobacco itself. We also found examples of Internet retailers signaling to customers in their marketing that pipe tobacco was suitable for smoking in roll-your-own cigarettes. One manufacturer of pipe tobacco had designed its label with three-letter markings to indicate to customers the product's similarity to brand-name cigarettes. For example, the marking MRD indicated Marlboro Red and CML indicated Camel Light. We approached 15 pipe tobacco manufacturers to ask about their companies' actions in response to the CHIPRA tax changes. Each of the three tobacco manufacturers that agreed to speak with us explained that their companies switched from selling higher-taxed roll-your-own tobacco to lower-taxed pipe tobacco in order to stay competitive. One company changed the cut of its roll-your-own tobacco and labeled it as pipe tobacco, although a company representative acknowledged that there was no real difference between its pipe-cut tobacco and its roll-your-own tobacco. A representative from another company that switched from selling roll-your-own tobacco to selling pipe tobacco stated that she was not aware of any difference in the two products other than the federal excise tax rate. Data show that the total number of companies exclusively manufacturing pipe tobacco has increased significantly since CHIPRA, while the number of companies exclusively manufacturing roll-your-own tobacco has decreased sharply. Treasury emphasized that it is unclear whether these manufacturers modified their roll-your-own tobacco beyond reclassifying it as pipe tobacco. Data also show the number of companies producing both roll-your-own and pipe tobacco has slowly increased since 2007 (see fig. 8).
The rise in pipe tobacco sales coincided with the growing availability of commercial roll-your-own machines. Treasury officials stated that there has recently been significant growth in commercial roll-your-own machines. These machines enable customers to produce a carton of cigarettes using pipe tobacco and cigarette-paper tubes with filters. By using pipe tobacco instead of roll-your-own tobacco, customers are able to save almost $9 per carton in federal excise taxes. A common commercial roll-your-own machine can produce a carton of cigarettes in less than 10 minutes, providing a significant time savings compared with making roll-your-own cigarettes by hand. During our visit to a tobacco outlet store in Maryland, we used a commercial roll-your-own machine to make a carton of 200 cigarettes using pipe tobacco in about 8 minutes. We made a video showing this machine being used to make cigarettes (see http://www.gao.gov/multimedia/video#video_id=589493). The carton we made in Maryland cost about $25, which included state and federal excise taxes. The total price of $25 for our carton was about half the price of a carton of discount cigarettes in nearby stores that sold tobacco (see fig. 9). CHIPRA's 2009 changes in federal excise tax rates on tobacco products also resulted in an immediate shift in the cigar market, with sales of lower-taxed large cigars rising sharply while sales of higher-taxed small cigars dropped. Figure 10 shows the market shift through monthly sales of small and large cigars from fiscal year 2001 through fiscal year 2011. Total annual sales of large cigars increased from approximately 4.8 billion sticks in fiscal year 2008 to about 10.3 billion sticks in fiscal year 2011, representing an increase of about 116 percent. For the same period, the total annual sales of small cigars declined from 5.3 billion sticks to 0.8 billion sticks, a decrease of 85 percent.
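The roughly $9-per-carton federal tax savings can be reconstructed from the per-pound rate gap. The sketch below assumes the post-CHIPRA statutory rates ($24.78 per pound for roll-your-own tobacco versus $2.8311 per pound for pipe tobacco, consistent with the "over $20 per pound" gap noted earlier) and an illustrative tobacco weight of 0.4 pounds for a 200-cigarette carton; the carton weight is our assumption, not a figure from this report:

```python
# Post-CHIPRA federal excise tax rates per pound (statutory rates; the
# difference is the "over $20 per pound" gap noted in the text).
RYO_TAX_PER_LB = 24.78
PIPE_TAX_PER_LB = 2.8311

# Assumed tobacco weight of a 200-cigarette carton -- illustrative only.
LBS_PER_CARTON = 0.4

savings = (RYO_TAX_PER_LB - PIPE_TAX_PER_LB) * LBS_PER_CARTON
print(f"Federal excise tax saved per carton: ${savings:.2f}")
```

Under these assumptions the savings come to about $8.78, in line with the "almost $9 per carton" figure above.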
According to government officials and representatives of nongovernmental organizations, because weight is the only characteristic that distinguishes small cigars from large cigars, many cigar manufacturers made their small cigars slightly heavier to qualify for the large cigar tax rate and avoid higher taxes levied on small cigars after CHIPRA. Figure 10 shows an increase in large cigar sales in the months immediately prior to the tax change. Treasury officials stated that although they have not specifically investigated the cause of this increase, there was an incentive for retailers and wholesalers to purchase and stockpile large cigars after the date CHIPRA was signed into law (February 4, 2009) and before the tax increase went into effect (April 1, 2009). In addition, these officials noted that a floor stocks tax is typically imposed to prevent stockpiling just before a tax increase, but the floor stocks tax imposed by CHIPRA did not apply to large cigars. The combined sales for small and large cigars continued to increase after CHIPRA, though at a slightly lower rate. Before CHIPRA, from October 2001 through March 2009, the combined average monthly growth rate was 0.75 percent, compared with a 0.17 percent growth rate from April 2009 through September 2011. See figure 11 for trends in overall cigar sales from fiscal year 2001 through fiscal year 2011. While tax revenue collected for all smoking tobacco products from April 2009 through the end of fiscal year 2011 amounted to $40 billion, we estimate that the market shifts from roll-your-own to pipe tobacco and from small to large cigars reduced federal revenue by a range of approximately $615 million to $1.1 billion for the same period. 
We estimated what the effect on tax revenue collection would have been if the sales trends for roll-your-own and pipe tobacco and for small and large cigars had not been affected by substitution between the products but had been affected by the increase in price due to the tax—in other words, if the market shifts resulting from the substitution of higher-taxed products with lower-taxed products had not occurred. In this report, we refer to this estimated effect on federal tax revenue collection as revenue losses. Although Treasury has taken steps to respond to these market shifts, it has limited options. For example, Treasury has pursued differentiating between roll-your-own and pipe tobacco for tax collection purposes but faces challenges because the definitions of the two products in the IRC do not specify distinguishing physical characteristics. Furthermore, Treasury also has limited options to address the market shift to large cigars. We estimated that federal revenue losses due to the market shifts from roll-your-own to pipe tobacco and from small to large cigars range from $615 million to $1.1 billion. This range includes combined tax revenue losses for the roll-your-own and pipe tobacco markets, as well as the small and large cigar markets. We conducted analyses of data from Treasury and the Bureau of Labor Statistics to estimate tax revenue losses in these markets. Our methodology takes into account the expected fall in demand for a product following a price increase, holding other variables constant. To calculate the range of federal revenue losses, we included high and low estimates based on assumptions about the effect of a price increase on projected sales. Economic studies show that, when the price of a product increases, the quantity demanded adjusts downward at an estimated rate known as the price elasticity of demand.
Based on our interviews with government officials and academics and our literature review, we determined that the price elasticity for the smoking tobacco products ranges from -0.6 to -0.3 for the low and high revenue estimates, respectively. Our projections also take into account the historic sales trends for these products and the tax component of the price. Appendix I contains more information on our methodology for developing these estimates. Treasury collected $573 million in tax revenue from roll-your-own and pipe tobacco from April 2009 through September 2011. We estimate that during the same period the market shift from roll-your-own to pipe tobacco reduced federal revenues by between $255 million and $492 million (see fig. 12). Treasury collected $1.7 billion in tax revenue from small and large cigars from April 2009 through fiscal year 2011. We estimate that during that same period the market shift from small to large cigars reduced federal revenue by between $360 million and $559 million (see fig. 13). Differentiating between roll-your-own and pipe tobacco for tax collection purposes presents challenges to Treasury because the definitions of the two products in the IRC are based on such factors as the use for which they are suited and how they are packaged and labeled for consumers and do not specify distinguishing physical characteristics. Treasury officials and representatives of nongovernmental organizations we spoke with stated that because the two products were taxed at the same rate prior to CHIPRA, there was no revenue-related reason to clarify the differences between the two products beyond the existing statutory definitions. However, according to Treasury comments in the Federal Register, the large differences in tax rates resulting from CHIPRA created an incentive for industry members to present roll-your-own tobacco as pipe tobacco products, thus enabling them to pay a lower tax rate.
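The counterfactual underlying these estimates can be sketched as a constant-elasticity adjustment: projected sales fall by the elasticity times the percentage price increase, holding other factors constant. In the sketch below, only the elasticity bounds (-0.6 and -0.3) come from the text; the baseline sales and price increase are illustrative numbers of our own, not the report's data:

```python
def counterfactual_sales(baseline_sales, pct_price_increase, elasticity):
    """Project post-tax-increase sales absent product substitution:
    quantity demanded falls by elasticity * percent price change."""
    return baseline_sales * (1 + elasticity * pct_price_increase)

# Illustrative example: 1,000,000 units sold before a 25 percent
# tax-driven price increase, bracketed by the two elasticity values.
baseline = 1_000_000
low_estimate = counterfactual_sales(baseline, 0.25, -0.6)   # larger demand drop
high_estimate = counterfactual_sales(baseline, 0.25, -0.3)  # smaller demand drop
print(f"Projected sales range: {low_estimate:,.0f} to {high_estimate:,.0f}")
```

Revenue losses are then estimated by comparing tax collections on these projected sales with collections on actual sales after the market shift, which is why the lower (more elastic) projection yields the smaller loss estimate.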
After the CHIPRA tax changes and the market shift from roll-your-own to pipe tobacco that immediately followed, Treasury took steps through rulemaking notices in an effort to more clearly differentiate the two products for tax collection purposes. However, Treasury has not yet issued a final rule to distinguish the two products based on physical characteristics. The tobacco industry members’ comments on the June 2009 temporary rule and the July 2010 advance notice of proposed rulemaking highlighted the complexity and difficulties in developing objective standards that clearly differentiate the two tobacco products. Treasury also issued a ruling determining that retail establishments that make cigarette-making machines available for use by customers are manufacturers of tobacco products. However, a U.S. District Court enjoined Treasury’s enforcement of the ruling pending the outcome of a court case, which was still ongoing as of March 2012. Table 4 summarizes Treasury’s actions on roll-your-own and pipe tobacco following CHIPRA, the resulting tobacco industry comments, and the status of Treasury’s actions. Temporary rule on packaging and labeling requirements: Following the CHIPRA tax changes that took effect in April 2009, Treasury published a temporary rule in June 2009, set to expire in June 2012, that outlined new labeling and packaging requirements for roll-your-own and pipe tobacco to more clearly differentiate the two products. The temporary rule required that, to be classified as pipe tobacco, the packaging must clearly indicate the product type by bearing the words “pipe tobacco” wherever the brand name appears, and that the packaging cannot suggest a use other than as pipe tobacco. Treasury also stated in the temporary rule that it was evaluating analytical methods and other standards to differentiate between roll-your-own tobacco and pipe tobacco, and it expected to publish rulemaking proposals on this subject for comment in the future.
In response to this temporary rule, Treasury received comments from tobacco industry members indicating that its new labeling and packaging requirements were insufficient to prevent the misclassification of roll-your-own tobacco as pipe tobacco and that standards to further differentiate the products were urgently needed. Industry members also proposed alternative standards to distinguish between roll-your-own and pipe tobacco based on physical characteristics such as moisture content, cut, and variety of tobacco used. The market shift from roll-your-own to pipe tobacco continued despite Treasury’s issuance of this temporary rule. Advance notice of proposed rulemaking on standards to differentiate roll-your-own and pipe tobacco: In July 2010, Treasury published an advance notice of proposed rulemaking issuing a request for public comments on standards and characteristics proposed by commenters to differentiate between roll-your-own and pipe tobacco, but it has not issued a subsequent rule proposing the standards it would use. In the notice, Treasury discussed the heightened need for more regulatory detail to clarify the difference between the two products and stated its primary concern that the standards be objective and enforceable. The industry members’ comments to Treasury highlighted the complexity and difficulties in developing objective standards that clearly differentiate the two tobacco products. Industry members disagreed on the standards and physical characteristics that should be implemented, with some commenters noting that the two products overlap greatly. Some industry commenters also expressed concerns that proposed standards could easily be manipulated by consumers. For example, a proposed standard for the cut width of pipe tobacco could be compromised by a consumer using basic kitchen or hardware appliances to grind wider cut tobacco into a smaller width for use in cigarettes.
In August 2011, Treasury issued a second advance notice of proposed rulemaking, thereby reopening the period for receiving comments on the proposed standards. Treasury said it did so because it had received an additional set of proposed standards after the original comment period closed. Treasury received a number of additional comments, many by the same companies that commented on the earlier notices, and the comments continued to reflect significant differences within the industry on standards that define and distinguish roll-your-own tobacco from pipe tobacco. This second comment period closed in October 2011. As of March 2012, Treasury has not issued a subsequent rulemaking based on the comments received, and no anticipated issuance date has been communicated. Throughout this period, the market shift from roll-your-own to pipe tobacco has continued, with negative impacts on federal revenue. Appendix III contains a more detailed summary of the Federal Register notices issued by Treasury related to differentiating between roll-your-own and pipe tobacco and the industry comments in response to these notices. Ruling on commercial cigarette-making machines: Treasury also issued a ruling in September 2010 determining that retailers who make commercial cigarette-making machines available for use on their premises are tobacco product manufacturers and are thus subject to the permit and tax requirements of the IRC. In October 2010, RYO Machine Rental LLC, the maker of the RYO Filling Stations, sued Treasury over this ruling. In December 2010, a federal district court judge in Ohio ordered a preliminary injunction on the enforcement of the Treasury rule, and the case is currently on appeal in the U.S. Court of Appeals for the Sixth Circuit. During the period that enforcement has been delayed, several organizations told us that businesses continue to maintain these machines on their premises, and the number of machines in use has increased. 
These machines, which cost the retailer about $30,000 each, have also been the focus of government regulation at the state level. A number of states are taking action against commercial roll-your-own machines, including Arkansas, Michigan, New Hampshire, and West Virginia. For example, Arkansas passed a law prohibiting tobacco retailers licensed, permitted, appointed, or commissioned under Arkansas tobacco tax law from possessing or using the machines. CHIPRA’s changes to the federal excise tax rate on large cigars also present challenges to Treasury. The first challenge resulted from CHIPRA’s tax rate on the most inexpensive large cigars, which was significantly lower than its rate for small cigars. This disparity in tax rates provided an incentive for some small cigar manufacturers to make minimal changes to their product to meet the legal definition of a large cigar. The second challenge came about because CHIPRA’s rate for large cigar taxes resulted in more large cigar manufacturers and importers paying taxes based on the manufacturer’s or importer’s sale price rather than simply paying the maximum set tax rate. This added complexity to Treasury’s monitoring and enforcement of large cigar tax payments and appears to have motivated some manufacturers and importers of large cigars to restructure their market transactions to lower the taxes they have to pay. The first challenge resulted from CHIPRA’s changes to the federal excise tax rate on large cigars, which created an incentive for small cigar manufacturers to switch to making large cigars when the manufacturer’s or importer’s sale price is less than $95.42 per thousand cigars. Before CHIPRA, there was little incentive for small cigar manufacturers to alter their product to meet the definition of a large cigar. 
Because small cigars are taxed at a fixed rate, and large cigars are taxed at an ad valorem rate, when CHIPRA raised the small cigar tax from $1.83 per thousand to $50.33 per thousand, manufacturers of inexpensive small cigars had an incentive to change their product to fit the lower-taxed large cigar category. According to Treasury officials and other industry experts, prior to CHIPRA, many small cigars weighed close to 3 pounds per thousand sticks, which is the dividing line between small and large cigars set by the IRC. Small cigars that weighed just under or exactly 3 pounds per thousand sticks would be able to qualify as large cigars with minimal changes. After CHIPRA, the same companies could use the same machines to add a small amount of weight to their product, turning small cigars into a product legally defined and taxed as large cigars. For example, manufacturers could add weight by packing the tobacco more tightly. Some manufacturers then changed their labels from “small cigars” to “filtered cigars” or “cigars”—often with the same packaging and design. For example, if a manufacturer sold cigars for $50 per thousand before CHIPRA, by manufacturing small cigars instead of large cigars, it would pay $1.83 per thousand in taxes, a tax savings of $8.53 per thousand. After CHIPRA, the same manufacturer selling cigars for $50 per thousand would pay $26.38 per thousand in taxes, a tax savings of $23.95 per thousand, by manufacturing large cigars instead of small cigars (see fig. 14). Treasury officials stated that the agency lacks the authority to remedy the tax revenue losses resulting from manufacturers’ legitimate modifications of small cigars to qualify them for the lower tax rate on large cigars. 
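The dollar figures in this example can be reproduced with a short calculation. One input below is background not stated in the passage: before CHIPRA, large cigars were taxed at 20.719 percent of the sale price, capped at $48.75 per thousand; all other figures come from the text.

```python
def large_cigar_tax(price_per_thousand, ad_valorem_rate, max_tax):
    """Large cigar tax: ad valorem on the sale price, up to a cap."""
    return min(price_per_thousand * ad_valorem_rate, max_tax)

PRICE = 50.0  # manufacturer's sale price per thousand cigars, from the example

# Pre-CHIPRA: small cigars taxed at $1.83 per thousand.
pre_large = large_cigar_tax(PRICE, 0.20719, 48.75)  # about $10.36 per thousand
pre_savings = pre_large - 1.83                      # about $8.53: small cigars cheaper

# Post-CHIPRA: small cigars taxed at $50.33 per thousand.
post_large = large_cigar_tax(PRICE, 0.5275, 402.60)  # $26.375, i.e., $26.38
post_savings = 50.33 - post_large                    # about $23.95: large cigars cheaper
```

The incentive flips with CHIPRA: the same $50-per-thousand product is taxed less as a small cigar before the law and less as a large cigar after it.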
The second challenge resulting from CHIPRA’s changes to tax rates on large cigars is the complexity that has been added to Treasury’s efforts to monitor and enforce tax payments because many more manufacturers and importers must now determine the correct tax by applying the tax rate to the manufacturer’s or importer’s sale price per stick (ad valorem) rather than simply paying the maximum set tax rate. According to Treasury officials, prior to CHIPRA, the majority of domestic manufacturers of large cigars paid the federal excise tax at the maximum rate of $48.75 per thousand cigars. Specifically, manufacturers or importers that sold large cigars priced at $235.30 per thousand and above paid the set maximum tax. The increase in the large cigar maximum tax after CHIPRA resulted in many more manufacturers and importers of large cigars paying taxes based on the ad valorem rate, according to Treasury officials. Currently, the maximum tax rate does not apply until the manufacturer’s or importer’s price is $763.22 per thousand or above, and then, the maximum rate is $402.60 per thousand. For example, if a manufacturer sold large cigars for $400 per thousand, before CHIPRA, it would pay $48.75—based on the maximum tax. After CHIPRA, the manufacturer’s tax would increase to $211 per thousand—based on the ad valorem rate. If the manufacturer is able to lower its price for the large cigar product from $400 to $300 per thousand, its tax would decrease to $158.25 per thousand, a tax savings of $52.75 per thousand. Before CHIPRA, if the manufacturer had lowered its price from $400 to $300, its tax rate would have remained at the maximum rate of $48.75 (see fig. 15). After CHIPRA, according to Treasury officials, some large cigar manufacturers and importers began to restructure their market transactions to lower the manufacturer’s or importer’s sale price for large cigars in order to obtain the tax savings of a lower ad valorem rate, creating enforcement challenges. 
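The post-CHIPRA large cigar schedule described above can be written as a single function that reproduces the figures in the example; this is an arithmetic illustration drawn from the text, not tax guidance.

```python
def post_chipra_large_cigar_tax(price_per_thousand):
    """52.75 percent of the manufacturer's or importer's sale price,
    capped at $402.60 per thousand (the cap binds at prices of
    $763.22 per thousand and above)."""
    return min(0.5275 * price_per_thousand, 402.60)

tax_at_400 = post_chipra_large_cigar_tax(400)     # about $211 per thousand
tax_at_300 = post_chipra_large_cigar_tax(300)     # about $158.25 per thousand
savings_from_price_cut = tax_at_400 - tax_at_300  # about $52.75 per thousand
# Pre-CHIPRA, both prices exceeded the $235.30 threshold, so the tax at
# either price was the flat maximum of $48.75 per thousand and cutting
# the price produced no tax savings.
```

Because the tax now varies with price over a much wider range, a seller who can lower its reported sale price directly lowers its tax bill.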
These Treasury officials stated that some manufacturers and importers are “structuring” or “layering” sales transactions by including an additional transaction at a low price before the sale to the wholesaler or distributor, and using this low initial price to calculate the tax. This transaction is conducted with an intermediary that may have a special contract arrangement with the manufacturer or importer. A large markup may then be added to the intermediary’s subsequent sale to the wholesaler or distributor. This added transaction effectively lowers the manufacturer’s or importer’s sale price, and thus reduces the taxes collected. According to Treasury officials, these layered transactions have become more common after CHIPRA. Treasury officials noted that manufacturers and importers of large cigars have approached the agency for advice on different proposals to structure their sales transactions to lower their taxes and still comply with the law. They also stated that Treasury has not determined the legality of all of the proposals under consideration, and that while Treasury can investigate individual cases, its authority to enforce additional tax collection from these kinds of large cigar transactions is limited. Officials stated that Treasury is carefully examining the tobacco importer and manufacturer pricing arrangements and taking corrective actions where appropriate on a case-by-case basis. The impact of the federal excise tax increases and the resulting actions by industry to mitigate the CHIPRA tax increase on large cigars are evidenced by large cigar pricing trends. Prior to CHIPRA, the average manufacturer’s or importer’s sale price for large cigars was $244 per thousand, Treasury officials stated. After the CHIPRA tax increases, the average manufacturer’s or importer’s sale price dropped to $189 per thousand.
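A stylized example of the layering Treasury officials describe follows; the prices are invented for illustration, and only the 52.75 percent ad valorem rate comes from this report.

```python
AD_VALOREM = 0.5275  # post-CHIPRA large cigar rate (cap ignored at these prices)

# Direct sale: the $400-per-thousand sale to the wholesaler is the tax base.
direct_tax = AD_VALOREM * 400.0   # about $211 per thousand

# Layered sale: a first sale to an intermediary at $200 per thousand sets
# the tax base; the intermediary's markup to $400 falls outside the base.
layered_tax = AD_VALOREM * 200.0  # about $105.50 per thousand

tax_reduction = direct_tax - layered_tax  # about $105.50 per thousand
```

The goods reach the wholesaler at the same price either way; only the point at which the taxable "sale price" is measured changes.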
According to Treasury officials, since large cigar federal excise taxes increased by a minimum of 155 percent, and the federal excise tax is included in the sale price, large cigar manufacturer’s and importer’s sale prices were expected to increase, not decrease. When the Tobacco Control Act amended the Food, Drug, and Cosmetic Act in June 2009, it granted FDA immediate regulatory authority over four tobacco products, including cigarettes and roll-your-own tobacco, but did not specify authority over pipe tobacco and small and large cigars. According to the law, FDA has the authority to deem by regulation any other tobacco products, including pipe tobacco and small and large cigars, to be subject to the tobacco provisions in Chapter IX of the Food, Drug, and Cosmetic Act. Deeming additional products to be subject to these tobacco provisions of the Food, Drug, and Cosmetic Act requires FDA to go through the process of developing and issuing a regulation (known as the rulemaking process). Because FDA does not currently regulate pipe tobacco and small and large cigars, these products are not subject to the tobacco product provisions in Chapter IX of the Food, Drug, and Cosmetic Act or regulations that FDA has issued since June 2009 to implement the Tobacco Control Act. Some of the act’s provisions and key FDA regulations address, for example, (1) the use of characterizing flavors, (2) the sale and distribution of tobacco products, and (3) the requirements for new health warnings depicting negative health consequences of smoking: Ban on the use of characterizing flavors: FDA implemented a ban on cigarettes with characterizing flavors in September 2009 (with the exception of tobacco or menthol). However, pipe tobacco and small and large cigars—some of which look similar to cigarettes (see fig. 1)—are available in multiple flavors because this Tobacco Control Act provision does not apply to these products.
Smokers can make roll-your-own cigarettes with flavored pipe tobacco and buy cigars in candy, berry, fruit, or other flavors. According to the U.S. Surgeon General, the growing popularity of cigars among younger adults (those under the age of 30) appears to be linked to the marketing of flavored tobacco products, including cigars, that might be expected to be attractive to youth. Restrictions on the sale and distribution of cigarettes and smokeless tobacco to protect children and adolescents: Pipe tobacco and small and large cigars are not subject to FDA’s rule containing numerous youth access and marketing restrictions that was issued in March 2010. One restriction generally prohibits the sale and distribution of individual cigarettes or packs containing fewer than 20 cigarettes. In contrast, cigars can be sold individually, and filtered cigars are often sold in packs containing fewer than 20. A second restriction generally requires that retail sales of cigarettes and smokeless tobacco be conducted in a direct, face-to-face exchange. This restriction does not apply to pipe tobacco and cigars, and these products are sold on the Internet. A third restriction bans brand-name sponsorship of sporting and cultural events by manufacturers, distributors, or retailers of cigarettes and smokeless tobacco and does not currently apply to pipe tobacco and cigars. A cigar company recently signed a multiyear sponsorship deal for a major collegiate sporting event, but the deal was canceled due to public pressure, as has been reported in the press. Requirements for new health warnings depicting negative health consequences of smoking: Pipe tobacco and cigar packs are not subject to FDA’s rule that requires each cigarette pack and advertisement to bear one of nine new textual warning statements accompanied by color graphics, issued in June 2011.
According to the law, the new warnings must cover the top half of the front and back of cigarette packs and at least 20 percent of cigarette advertisements and must contain color graphics depicting the negative health consequences of smoking. FDA selected nine color graphic health warning messages after reviewing relevant scientific literature, 1,700 public comments, and the results of its experimental 18,000-person study to assess the effectiveness of the warnings. While the Tobacco Control Act mandates that the warnings take effect no later than 15 months after FDA issues regulations, that is, by September 2012, pending litigation may impact implementation. FDA indicated its interest in deeming additional tobacco products to be subject to the agency’s tobacco product authorities in the four most recent issues of the U.S. government’s semiannual regulatory agenda. In the spring and fall 2010 agendas, FDA announced that it planned to issue a proposed rule that would deem cigars to be subject to the provisions of the Food, Drug, and Cosmetic Act. In the spring and fall 2011 agendas, FDA announced that it planned to broaden the proposed rule’s scope to encompass all products that meet the statutory definition of “tobacco product” under Chapter IX of the Food, Drug, and Cosmetic Act. The fall 2011 announcement, the most recent, indicated that the proposed rule would be issued in December 2011; however, FDA had not issued the proposed rule as of March 2012, and FDA officials told us that developing the rule is taking longer than they expected. A typical rulemaking process consists of three basic phases—initiation of rulemaking actions, development of proposed rules, and development of final rules—and involves internal review by the rulemaking agency, external review by the Office of Management and Budget, and public comments on proposed rules (fig. 16).
In developing the proposed rule deeming additional products, including pipe tobacco and cigars, to be subject to the agency’s regulatory authority, FDA is in the second phase of the process. FDA officials told us that, as of March 2012, the proposed rule was undergoing review by the agency and the Department of Health and Human Services and that FDA had not yet submitted the proposed rule to the Office of Management and Budget. In a 2009 report on the federal rulemaking process, we found—based on an analysis of 16 rules at different federal agencies, including FDA—that the average time needed to initiate, develop, and complete a rulemaking was about 4 years, with considerable variation among agencies and rules. FDA will be able to exercise authority over the deemed products once the rulemaking process is completed and the final rule is published in the Federal Register. At that time, the deemed products will be subject to the provisions of Chapter IX of the Food, Drug, and Cosmetic Act that are applicable to tobacco products in general. Examples of such provisions include a requirement for annual registration with FDA of establishments engaged in the manufacture of tobacco products, payment of user fees by manufacturers and importers of specified classes of tobacco products, and restrictions and penalties for misbranded products. However, if FDA decides to expand the scope of its existing regulations applicable to cigarettes and roll-your-own tobacco to encompass the deemed products, it will have to amend those regulations through the rulemaking process. For example, FDA would have to amend its rule covering the sale and distribution restrictions for cigarettes and smokeless tobacco in order to make it applicable to the deemed products. Federal legislation has aimed to discourage tobacco use and raise revenues by increasing excise taxes on tobacco products.
In 2009, Congress passed CHIPRA, which increased taxation on all smoking tobacco products, but by different levels for pipe tobacco and for large cigars. Also in 2009, Congress passed the Tobacco Control Act, which gave FDA immediate regulatory authority over four tobacco products, including cigarettes and roll-your-own tobacco, but did not specify authority over pipe tobacco and small and large cigars. In equalizing the federal excise tax rates on small cigars and roll-your-own tobacco with the tax rate on cigarettes, CHIPRA was responding to concerns that these products were increasingly used as substitutes for factory-made cigarettes. However, by introducing large tax disparities between cigarettes, roll-your-own tobacco, and small cigars, on the one hand, and pipe tobacco and large cigars, on the other, CHIPRA has contributed to the substitution of higher-taxed tobacco products with lower-taxed products. Sales of the lower-taxed pipe tobacco and large cigars saw significant growth following CHIPRA, as manufacturers and consumers sought to take advantage of lower-taxed products. We estimate that this tax avoidance has resulted in between approximately $615 million and $1.1 billion in lost federal revenue since 2009. Treasury has not succeeded in addressing the continued tax avoidance behavior reflected in the market shifts to pipe tobacco and to large cigars. In the absence of legislative changes, Treasury has limited options for effective action. First, roll-your-own and pipe tobacco are similar and, in some cases, may be substitutable products, and the IRC lacks specificity on how they should be distinguished based on physical characteristics. Treasury is currently considering and analyzing various proposals to more clearly and objectively differentiate the two products based on their physical characteristics.
However, the lack of consensus on which characteristics or criteria truly define and differentiate roll-your-own from pipe tobacco reveals the complexity and difficulty in attempting to develop standards and tests to distinguish the products from each other. In addition, there is the concern that products could easily be manipulated to negate any newly established standards or tests. Because small and large cigars are distinguished in the IRC only by weight, and because many small cigars already weighed at or close to the 3 pounds per thousand threshold for classification as large cigars, many small cigar manufacturers were able to legally shift to the lower-taxed large cigar category with minimal changes to their products. In addition, the large cigar tax structure, which consists of an ad valorem tax rate up to a maximum rate, is complex and creates an incentive to lower the manufacturer’s or importer’s sale price to avoid paying higher federal excise taxes. FDA, which implements the Tobacco Control Act, currently regulates cigarettes and roll-your-own tobacco but does not regulate pipe tobacco and small and large cigars. These regulatory disparities make pipe tobacco and small and large cigars more accessible and attractive to current and potential smokers. While FDA announced its intent to issue a proposed rule that would subject additional products, including pipe tobacco and small and large cigars, to its regulation, it had not issued the proposed rule as of March 2012. Disparities in tax rates on smoking tobacco products have negative revenue implications because they create incentives for manufacturers and consumers to substitute higher-taxed products with lower-taxed products. In light of that fact, as Congress continues its oversight of CHIPRA and Tobacco Control Act implementation, it should consider modifying tobacco tax rates to eliminate significant tax differentials between similar products. 
Specifically, Congress should consider equalizing tax rates on roll-your-own and pipe tobacco and, in consultation with Treasury, also consider options for reducing tax avoidance due to tax differentials between small and large cigars. We provided a draft of this report to the Secretary of the Treasury and the Secretary of Health and Human Services for their review and comment. We received technical comments from Treasury and the U.S. Department of Health and Human Services, which we have incorporated in the report as appropriate. We also received written comments from Treasury, which are reprinted in appendix IV. Treasury generally agreed with our overall conclusion that CHIPRA’s introduction of large tax disparities between similar products contributed to the substitution of higher-taxed tobacco products with lower-taxed products. Treasury also agreed with our observation concerning modifying tobacco tax rates to eliminate significant tax differentials between similar products, which is consistent with our Matter for Congressional Consideration. Treasury noted our use of the term “revenue losses” and commented that our estimates did not pertain to actual losses of revenues but rather were estimates of revenue increases that would be realized if Congress were to change the law to eliminate the tax disparities or had the market shifts due to the disparities not occurred. We state in the report that our analysis does not incorporate the hypothetical case of equal tax rates among smoking products; rather, we estimate the revenues Treasury would have collected under current law—but in the absence of the market shifts from higher-taxed products to lower-taxed products. 
The difference between the revenues collected under current law and our estimate of the higher revenues that would have been due in the absence of the market shifts is what we refer to as “revenue losses.” In response to Treasury’s comment about the use of this term, we note that Treasury’s Alcohol and Tobacco Tax and Trade Bureau developed its own estimates of what it termed revenue losses stemming from the market shifts involving these products, and we discuss these estimates in our report. In addition, the Alcohol and Tobacco Tax and Trade Bureau’s 2011 Annual Report uses the term revenue losses when estimating the effect of the market shifts since CHIPRA. Appendix I contains a more detailed discussion of our methodology for developing our estimates. We are sending copies of this report to the appropriate congressional committees and to the Secretaries of Health and Human Services and Treasury, and other interested parties. This report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals who made key contributions to this report are listed in appendix V. The Family Smoking Prevention and Tobacco Control Act (Pub. L. No. 111-31) directed GAO to report on various aspects of cross-border and illicit trade in tobacco products, including the effects of differing tax rates applicable to tobacco products. In accordance with our agreement with the staff of the Senate Committee on Health, Education, Labor, and Pensions and the House Energy and Commerce Committee, this report provides information on the federal revenue effects of differing tax rates applicable to tobacco products.
Our objectives for this report are to (1) review the market shifts among smoking tobacco products since the Children’s Health Insurance Program Reauthorization Act (CHIPRA) of 2009 went into effect on April 1, 2009; (2) examine the impact of these market shifts on federal revenue and the Department of the Treasury’s (Treasury) actions to respond; and (3) describe differences in regulation of various smoking tobacco products by the Food and Drug Administration (FDA). Our review includes smoking tobacco products that are subject to federal excise tax: cigarettes and four other tobacco products—roll-your-own tobacco (sometimes called RYO), pipe tobacco, small cigars, and large cigars. However, in analyzing the market shifts among these products, we focused solely on the four smoking tobacco products other than cigarettes. To address the three objectives in this study, we reviewed documents and interviewed agency officials from Treasury’s Alcohol and Tobacco Tax and Trade Bureau, FDA, and the Centers for Disease Control and Prevention, as well as tobacco industry members, representatives of public health and other nongovernmental organizations, and academics to obtain information on tobacco legislation and regulations, tobacco product sales trends, and consumption patterns. Tobacco industry members that we spoke with included industry associations and individual companies. We identified and contacted 15 pipe tobacco manufacturers to ask about their companies’ actions in response to the CHIPRA tax changes, and 3 of the manufacturers agreed to speak with us. We also reviewed studies analyzing the relationship between tobacco tax increases and smoking, including among youth. We also collected data from Treasury, the Bureau of Labor Statistics, and the Department of Agriculture and determined that they were sufficiently reliable for our purposes. 
We analyzed Treasury removals data to identify sales trends across the different tobacco products before and after CHIPRA took effect. In addition, we collected and analyzed price data and data on federal excise tax rates for roll-your-own tobacco, pipe tobacco, small cigars, and large cigars, as well as the federal tax revenue generated from their sale. We estimated what the effect on federal tax revenue collection would have been if the market shifts resulting from substitution of higher-taxed products with lower-taxed products had not occurred once CHIPRA’s higher tax rates went into effect. In this report, we refer to this estimated effect on federal tax revenue collection as revenue losses. Our analysis takes into account the expected fall in quantity demanded due to the price increases resulting from the higher federal excise tax rates that CHIPRA imposed on these smoking tobacco products. To estimate federal tax revenue losses due to market shifts after CHIPRA, we analyzed Treasury’s monthly sales and revenue data from fiscal year 2001 through fiscal year 2011 for roll-your-own and pipe tobacco and for small and large cigars. Our analysis compares the actual tobacco tax revenue collected by Treasury with a counterfactual scenario. Our counterfactual model draws from one used by Dr. Frank Chaloupka, an economist who has investigated the effect of prices and taxes on tobacco consumption in numerous publications. In particular, we follow the methodology used in a paper from January 2011 in which Dr. Chaloupka calculates the effect of raising cigarette taxes in the state of Illinois. This methodology projects the effect of a future tax increase based on the historic sales trend, the amount of the tax, and the effect of a price increase on projected sales (that is, price elasticity of demand). 
Our counterfactual model, then, projects post-CHIPRA sales of roll-your-own and pipe tobacco and small and large cigars according to the historic sales trends for these products, adjusted downward to account for the fall in demand due to the higher post-CHIPRA tax component of the price. To calculate the impact on demand due to the higher taxes on these products, we included high and low estimates for price elasticity. Based on our interviews with experts and a review of the relevant literature, we assumed that the price elasticity for the smoking tobacco products in our analysis ranges from -0.6 to -0.3, which set, respectively, the low and high boundaries of the estimated revenue losses. Our analysis does not incorporate the hypothetical case of equal tax rates among smoking tobacco products; rather, we estimate the revenues that Treasury would have collected in the absence of the market’s substitution of higher-taxed products with lower-taxed products. An analysis that projected the impact of equal tax rates across all smoking tobacco products would necessarily produce a much higher estimate of lost tax revenues. We did not attempt to develop such a model, however, because doing so was beyond the scope of our analysis. The reliability of any such model would depend on the assumptions made, particularly with regard to large cigars—the only tobacco product for which excise taxes are calculated as a percentage of price. Compared with determining the tax on all other tobacco products, according to Treasury, determining the tax on large cigars is extremely complex. Modeling hypothetical consumption trends for smoking tobacco products after equalizing tax rates on them would require a complex set of assumptions not necessarily grounded in reliable data. We used data from two sources to build our counterfactual model projecting post-CHIPRA sales of roll-your-own and pipe tobacco and small and large cigars.
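Under the assumptions just described, the counterfactual for a single product can be sketched as follows. This is a simplified illustration of the approach, not GAO's actual model, and the trend quantity, price, and tax figures are invented.

```python
def counterfactual_revenue(trend_qty, pre_price, new_tax, old_tax, elasticity):
    """Project post-CHIPRA revenue had no market shift occurred.

    The post-CHIPRA price is approximated as the pre-CHIPRA price plus
    the tax increase; demand is scaled down by elasticity times the
    percentage price change, then multiplied by the new per-unit tax.
    """
    post_price = pre_price + (new_tax - old_tax)
    pct_change = (post_price - pre_price) / pre_price
    adjusted_qty = trend_qty * (1 + elasticity * pct_change)
    return adjusted_qty * new_tax

# Hypothetical: 10 million units on trend, $2.00 pre-CHIPRA price per unit,
# per-unit tax rising from $0.25 to $1.25.
low = counterfactual_revenue(10_000_000, 2.00, 1.25, 0.25, -0.6)
high = counterfactual_revenue(10_000_000, 2.00, 1.25, 0.25, -0.3)
# Estimated revenue loss = counterfactual revenue - revenue actually collected.
```

The more elastic assumption (-0.6) yields lower projected sales and thus the lower bound of the revenue loss range; -0.3 yields the upper bound.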
The first source is Treasury’s data from fiscal year 2001 through fiscal year 2011 on smoking tobacco product tax revenues and removals (the amount of tobacco removed for sale from the factory or released from customs). The second data source is tobacco products price data from the Bureau of Labor Statistics, which it uses to calculate the Consumer Price Index for tobacco products. The Bureau of Labor Statistics data contain retail price information collected each month throughout the country; the prices include the cost of production, markup, and excise taxes from federal, state, and local governments—shipping, handling, sales tax, and fuel surcharges have been removed from the data. For roll-your-own and pipe tobacco and for small and large cigars, we calculated an average taxable manufacturer’s or importer’s sale price for the year before CHIPRA was enacted. We then estimated the post-CHIPRA price by adding the corresponding post-CHIPRA tax to the pre-CHIPRA price. Thus, our counterfactual model includes only the effect of CHIPRA on tax revenue. To calculate the average taxable manufacturer’s or importer’s sale price for large cigars, we used Treasury’s revenue data and removals data. Treasury collects revenue data for cigars but does not collect separate revenue data for small and large cigars. However, Treasury’s removals data are separated by small and large cigars, reporting the number of sticks removed for sale from the factory or released from customs. After CHIPRA, small cigars are taxed at $50.33 per thousand sticks, whereas large cigars are taxed at 52.75 percent of the manufacturer’s or importer’s price up to a maximum tax rate per thousand sticks. We calculated small cigar revenue by multiplying the number of sticks reported in Treasury’s removals data in each month by the tax rate. We then calculated large cigar revenues by subtracting small cigar revenues from total cigar revenues.
Once we had calculated the large cigar revenues, we estimated the average tax paid by dividing the large cigar revenues by the number of large cigar sticks reported in the removals data for each month and calculating the average tax per stick. From March 2007 through March 2009, the average large cigar tax collected was 4.3 cents per stick. These figures corroborate Treasury’s statement that a majority of manufacturers were paying the maximum rate. CHIPRA raised this maximum rate from 4.8 cents to 40 cents per stick. We estimated that the average taxable manufacturer’s or importer’s sale price before CHIPRA was 20.65 cents. Hence, the average tax paid after CHIPRA using the new tax rate should be 10.9 cents per cigar, and this is the number we used to estimate post-CHIPRA tax revenues in our counterfactual model. Treasury does not maintain records of the manufacturers’ and importers’ sale prices of large cigars where the manufacturer or importer paid the maximum rate, thereby making it impossible to determine the magnitude of underestimation in our model caused by the maximum rate. To describe FDA’s regulation of tobacco products under Chapter IX of the Food, Drug, and Cosmetic Act, we examined FDA’s regulatory actions and announcements and interviewed officials from FDA’s Center for Tobacco Products, including the Offices of Compliance and Enforcement, Policy, Regulations, and Science. We conducted this performance audit from March 2011 to April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
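The back-out arithmetic described above reduces to a few lines. The sketch below uses the rates and the 20.65-cent average price cited in this methodology discussion; the function name and the example revenue figures are ours, purely for illustration.

```python
SMALL_CIGAR_RATE = 50.33 / 1000   # post-CHIPRA tax, dollars per small cigar stick
LARGE_CIGAR_RATE = 0.5275         # 52.75 percent of the manufacturer's/importer's price

def large_cigar_avg_tax(total_cigar_revenue, small_sticks, large_sticks):
    """Back out the average large cigar tax per stick: Treasury reports one
    combined cigar revenue figure but separates removals by cigar size."""
    small_revenue = small_sticks * SMALL_CIGAR_RATE
    large_revenue = total_cigar_revenue - small_revenue
    return large_revenue / large_sticks

# Post-CHIPRA tax implied by the 20.65-cent average pre-CHIPRA sale price
# (ignoring the maximum-rate cap): 52.75% of $0.2065, about 10.9 cents.
post_chipra_tax = LARGE_CIGAR_RATE * 0.2065
```

Note that because any cigars taxed at the capped maximum rate violate the simple ad valorem assumption, this calculation understates revenue in exactly the way the text describes.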
Treasury’s data on taxable removals (sales) show that the decline in cigarette sales in the last decade has been partially offset by the combined growth in sales of roll-your-own tobacco, pipe tobacco, small cigars, and large cigars. Table 5 provides annual sales data for cigarettes, roll-your-own tobacco, pipe tobacco, small cigars, and large cigars from fiscal year 2001 through fiscal year 2011. Figure 17 uses the same data to depict the concomitant decline in cigarette sales and growth in combined sales of the other four smoking tobacco products. From fiscal year 2001 through fiscal year 2011, sales of the smoking tobacco products—cigarettes, roll-your-own tobacco, pipe tobacco, small cigars, and large cigars—in the United States decreased by about 26 percent. Sales of cigarettes, which continue to dominate the market, declined by 30 percent from about 414 billion sticks in fiscal year 2001 to about 289 billion sticks in 2011. However, combined sales of roll-your-own tobacco, pipe tobacco, small cigars, and large cigars increased by 131 percent during the same period from about 12 billion sticks or cigarette stick equivalents (for roll-your-own and pipe tobacco) in fiscal year 2001 to about 29 billion sticks or cigarette stick equivalents. The share of these four products grew from 3 percent of the smoking tobacco market in fiscal year 2001 to 9 percent in fiscal year 2011. Treasury published a temporary rule and request for public comments in June 2009 that outlined new labeling and packaging requirements for roll-your-own and pipe tobacco to more clearly differentiate the two products on those bases. Treasury also noted the need for additional rulemaking on other standards and methods to differentiate the products. In response to its June 2009 rulemaking notice, industry members proposed standards to distinguish between roll-your-own and pipe tobacco based on physical characteristics.
For example, Treasury received comments setting forth certain criteria for distinguishing between the products based on whether the product met a certain number of factors, including moisture content; cut width; percentage of weight consisting of reducing sugars; and percentage of weight consisting of flavoring, casing, or other nontobacco content. In July 2010, Treasury published an advance notice of proposed rulemaking issuing a request for public comments on these and other standards proposed by commenters to differentiate between roll-your-own and pipe tobacco. The industry members’ comments responding to Treasury’s 2010 request highlighted the complexity and difficulties in developing objective standards based on physical characteristics that clearly differentiate the two tobacco products. Industry members disagreed on the number of criteria that should be used and the specific thresholds for differentiating between the products. For example, while some industry members generally agreed that pipe tobacco traditionally has had a thicker cut and greater moisture content than roll-your-own tobacco, they disagreed on the specific cut width or moisture content that defines pipe tobacco. Some comments noted that the physical characteristics of the two products overlap greatly, emphasizing the numerous types of roll-your-own and pipe tobacco products on the market and various manufacturing methods, all of which make it difficult to develop concrete definitions that clearly differentiate between the two products. Other comments emphasized the challenges of conducting tests to distinguish the two products as, for example, test results can be influenced by factors such as the age of the sample used and the temperature of the facility, potentially creating different results on tests of the same tobacco products. 
Some industry members also proposed that Treasury take into consideration the preexisting or established pipe tobacco brands prior to CHIPRA and continue to classify them as pipe tobacco through a grandfathering clause, regardless of how the tobacco might fare in any tests based on objective standards. Other industry members disagreed, however, stating that a grandfathering clause would favor existing companies, reduce competition, and give some companies the opportunity to introduce misclassified pipe tobacco into the market without accountability. Other industry members expressed concerns that the proposed standards could easily be manipulated by consumers. For example, the tobacco cut width standard for pipe tobacco could be compromised by a consumer using a blender or coffee grinder to obtain a smaller width for use in cigarettes. Additionally, the moisture content standard could also prove to be ineffective because end users could dry out the moister pipe tobacco for use in cigarettes. After the initial public comment period closed in September 2010, Treasury did not issue a subsequent rulemaking on clarifying standards. Treasury said it received an additional proposal after the close of the comment period and, as a result, issued a second advance notice of proposed rulemaking in August 2011 reopening the period for receiving comments on the standards proposed by commenters, including the new proposal. Treasury received a number of additional comments, many by the same companies that commented on the earlier notices, and the comments continued to reflect significant differences within the industry on standards that define and distinguish roll-your-own tobacco from pipe tobacco. This second comment period closed in October 2011, and Treasury has not issued a subsequent rulemaking as of March 2012. 
Within the 2011 notice, Treasury also published the results of the preliminary analysis conducted by its laboratory on a sample of roll-your-own and pipe tobacco products. For this analysis, Treasury purchased a sample of products labeled as roll-your-own and pipe tobacco from local retail vendors in Maryland. These samples were purchased just prior to the CHIPRA tax increases going into effect. Treasury officials emphasized that their sample was not a representative market sample and thus not generalizable. Treasury officials stated that the purpose of the preliminary analysis was to investigate what could be learned about the initial proposed standards rather than to complete a definitive test differentiating the products or attempting to determine whether the products were roll-your-own or pipe tobacco, as they were labeled. Treasury tested for several of the proposed standards, including total reducing sugars and moisture content. Treasury’s results, in some cases, appeared to show a lack of a clear distinction between the roll-your-own and pipe tobacco samples. In addition to the individual named above, Christine Broderick, Assistant Director; Sada Aksartova; Pedro Almoguera; David Dayton; Etana Finkler; Jeremy Latimer; Grace Lui; and Alana Miller made key contributions to this report. In addition, Barbara El Osta, Joyce Evans, Marc Molino, Theresa Perkins, Jena Sinkfield, and Cynthia S. Taylor provided technical assistance.

In 2009, CHIPRA increased and equalized federal excise tax rates for cigarettes, roll-your-own tobacco, and small cigars. Though CHIPRA also increased federal excise tax rates for pipe tobacco and large cigars, it raised the pipe tobacco tax to a rate significantly below the equalized rate for the other products, and its large cigar excise tax can be significantly lower, depending on price. Treasury collects federal excise taxes on tobacco products.
Also passed in 2009, the Family Smoking Prevention and Tobacco Control Act (Tobacco Control Act) granted FDA regulatory authority over tobacco products. This act directed GAO to report on trade in tobacco products, including the effects of differing tobacco tax rates. This report (1) reviews the market shifts in smoking tobacco products since CHIPRA; (2) examines the impact of the market shifts on federal revenue and Treasury’s actions to respond; and (3) describes differences in FDA’s regulation of various smoking tobacco products. GAO interviewed agency officials, industry members, and public health representatives. GAO analyzed tax and revenue data and reviewed relevant literature. Large federal excise tax disparities among tobacco products, which resulted from the Children’s Health Insurance Program Reauthorization Act (CHIPRA) of 2009, created opportunities for tax avoidance and led to significant market shifts by manufacturers and price-sensitive consumers toward the lower-taxed products. Monthly sales of pipe tobacco increased from approximately 240,000 pounds in January 2009 to over 3 million pounds in September 2011, while roll-your-own tobacco dropped from about 2 million pounds to 315,000 pounds. For the same months, large cigar sales increased from 411 million to over 1 billion cigars, while small cigars dropped from about 430 million to 60 million cigars. According to government, industry, and nongovernmental organization representatives, many roll-your-own tobacco and small cigar manufacturers shifted to the lower-taxed products after CHIPRA to avoid paying higher taxes. While revenue collected for all smoking tobacco products from April 2009 through fiscal year 2011 amounted to $40 billion, GAO estimates that federal revenue losses due to market shifts from roll-your-own to pipe tobacco and from small to large cigars range from about $615 million to $1.1 billion for the same period.
The Department of the Treasury (Treasury) has limited options to respond to these market shifts. Treasury has attempted to differentiate between roll-your-own and pipe tobacco for tax purposes but faces challenges because the definitions of the two products in the Internal Revenue Code of 1986 do not specify distinguishing physical characteristics. Treasury also has limited options to address the market shift to large cigars and faces added complexity in monitoring and enforcing tax payments due to the change in large cigar tax rates. Unlike cigarettes and roll-your-own tobacco, pipe tobacco and cigars are not currently regulated by the Food and Drug Administration (FDA) and thus are not subject to the same restrictions on characterizing flavors, sales, or distribution. In 2011, FDA indicated its intent to issue a proposed rule that would deem products meeting the statutory definition of tobacco product to be subject to FDA’s regulation. However, FDA had not issued the proposed rule as of March 2012. FDA officials told GAO that developing the rule was taking longer than expected. As Congress continues its oversight of CHIPRA and Tobacco Control Act implementation, it should consider equalizing tax rates on roll-your-own and pipe tobacco and, in consultation with Treasury, consider options for reducing tax avoidance due to tax differentials between small and large cigars. Treasury generally agreed with GAO’s conclusions and observations.
USPS is a vast enterprise that delivers about 680 million pieces of mail daily to virtually every household and business in the United States through an array of services. Typical mail items—letters, flats, and parcels—may be introduced into the mailstream through mailboxes and collection boxes, thousands of drop points at customer sites, mail facilities, and other locations across the country. Once mail enters the USPS mail processing operation, it becomes part of a complex and diversified system, requiring the coordinated effort of mail processing plants and delivery units across the country. While much of mail delivery is labor intensive, most of the effort required to sort the mail for distribution has been automated by a series of high-volume machines. USPS has at least 10 different types of automated mail processors totaling more than 10,000 pieces of equipment in operation. These machines exist at various points in the mailstream and exert mechanical forces that are likely to release substantial amounts of anthrax spores from a contaminated piece of mail. The October 2001 anthrax attacks raised great concerns over the security of postal employees and customers from exposure to biohazardous materials. In January 2002, Congress passed Public Law 107-117, providing USPS $500 million for emergency expenses to buy equipment for sanitizing and screening mail and to protect postal employees and customers from biohazardous material, with the requirement that USPS develop an emergency plan. On March 6, 2002, USPS issued its Emergency Preparedness Plan. The plan discusses a variety of process changes and technology initiatives that could be applied to the threat of biohazards in the mail. In addition, the plan addresses USPS’s goals of protecting postal employees and customers from exposure to biohazardous material and safeguarding the mail system from future bioterror attacks, while maintaining current service levels.
USPS plans to achieve this by developing prototypes to test and evaluate which technologies should be used together with existing mail processing equipment. To fund its efforts, USPS plans to request approximately $1.8 billion in additional funding for fiscal years 2002 through 2004. In response to the anthrax-laden letters that caused widespread contamination at two postal facilities, USPS began testing HEPA filters to minimize paper dust, reduce risks to employees from biohazards, and clean mail processing equipment. The Postal Service plans to deploy this technology at nearly 300 P&DC/Fs that handle outgoing mail, but is specifically testing the prototypes for this technology at its Dulles and Merrifield, Virginia P&DC/Fs. These filtration systems have been installed to run on two major types of mail processing equipment, the Delivery Bar Code Sorter (DBCS) and the Advanced Facer-Canceller System (AFCS), at both sites. The DBCSs are computerized machines that sort letter-sized mail by using a reader to interpret an imprinted barcode, while the AFCS is a type of mail processing equipment that automatically faces letter-sized mail in a uniform orientation and cancels the postage stamps. However, issues associated with the design and effectiveness of HEPA filtration systems still need to be addressed. First, USPS has not completed necessary tests and analysis to confirm the effectiveness of HEPA filtration systems installed on mail processing equipment and, therefore, does not know whether this technology will satisfy the agency’s objectives. Second, the benefits of the HEPA air filtration system’s ability to reduce dust and clean the mail processing equipment have not been confirmed. Third, the amount of energy needed to run the HEPA systems might overwhelm the existing power supply at some P&DC/Fs and, therefore, degrade the operation of current mail processing equipment.
Finally, the mail processing equipment will have to be modified in order for the filtration systems to operate effectively. To date, USPS has performed initial tests to determine the effectiveness of its HEPA system’s (1) airflow velocity and (2) ability to remove dust in the mail sorting machines. However, USPS has not yet confirmed whether its HEPA filtration system’s prototypes are designed properly to capture and contain airborne anthrax within the system and not release it into the mail processing environment. As a result, USPS does not yet know whether this technology will meet its intended objectives. USPS has performed tests to determine its HEPA filtration system’s airflow velocity, but it has not performed the necessary test to confirm whether the system can actually capture anthrax spores in a mail-processing environment. When installed correctly and in the proper environment, HEPA filters are designed to effectively capture 99.97 percent of all dust, pollen, mold spores, and bacteria at the 0.3-micron particle size that might pass through them. Because biohazard particles typically fall into the range of 1 to 10 microns, HEPA filtration may significantly reduce the number of particles that exhaust from the vacuum system into the ambient air of postal facilities. USPS has designed its air filtration equipment such that the air flows in accordance with industry standards to capture particle sizes similar to anthrax. To test the effectiveness of this design, USPS is working with the National Institute for Occupational Safety and Health to release smoke and tracer gas to verify that the air filtration equipment is working as expected. Using tracer gas confirms that the system is moving air as intended through the filters. Experts from the Environmental Protection Agency agree with this approach for testing airflow and capture velocity.
However, this procedure does not test either how much anthrax is trapped in the system or the system’s effectiveness in not releasing anthrax into the mail processing environment. Without conducting tests that confirm the system’s ability to trap anthrax and not release any into the mail processing environment, the USPS has not proven that its design will meet the intent of protecting its employees and customers. According to USPS, another benefit of installing HEPA air filtration systems is that the negative air pressure (i.e., vacuum) generated by the systems may help clean the mail processing equipment. Until October 2001, USPS mail processing machines, including rollers, belts, and electronic card cages, were cleaned with compressed air—pressurized air exiting through nozzles akin to the air nozzles used to fill up tires—a generally acceptable way to blow out and clean dusty equipment. USPS maintenance personnel stated that using compressed air is the best way to clean its machines because most of the dust collects on the pinch rollers, which are hard to access using a vacuum nozzle. However, USPS banned compressed air blowing following the anthrax attacks last fall. As a result, USPS began installing HEPA systems to permanently vacuum its mail processing equipment and reduce or eliminate the need to hand vacuum the internal workings of the machines, which is the current process. USPS recently performed a test to quantify the amount of dust collected by the HEPA filtration systems deployed at Dulles, but the results have not yet been analyzed. USPS gathered data from June 11 through June 25, 2002, on the amount of dust captured by the HEPA filtration systems installed at the Dulles facility. The test used data collected from four machines—two AFCSs and two DBCSs—to determine how much dust the filtration systems are actually capturing and how much dust remains in the mail processing equipment. 
Although one AFCS and one DBCS have a HEPA filtration system installed, the remaining two do not. The test involved using preweighed filters on four portable HEPA vacuum cleaners, which are used to clean the four machines individually. After the 2-week test period, USPS weighed the portable vacuum filters and canisters to determine how much dust was collected from the machines with the HEPA filtration system versus those without the prototype system. These test results are still being analyzed. While this initial testing is a positive step, we are concerned that the amount of dust collected by the portable HEPA vacuums from the mail processing machines with filtration systems will be understated because the data reflect a 24-hour collection period rather than the machines’ normal operating periods, which range from 7 to 16 hours depending on the type of equipment. Accordingly, the test may not provide USPS with the reliable data necessary to make valid conclusions about the efficiency of the HEPA filtration system. Given the importance of USPS’s initiative, it is imperative that reliable tests be performed to confirm whether the use of air filtration systems to clean mail processing equipment is effective. According to our preliminary observations, the HEPA filtration systems installed at the Dulles P&DC/F are collecting relatively few dust particles and may be causing the dust to settle inside the mail processing equipment. When we visited the Dulles P&DC/F, we were shown the trays where some of the dust could settle. The trays contained only rubber bands, paper clips, loose bits of paper, and mail. See figure 1 for the contents of the HEPA filtration system’s tray at the Dulles P&DC/F. The Dulles P&DC/F maintenance manager stated that when maintenance personnel blew air back through the filters to purge any dust that may be trapped in them, there was no dust dislodged and the filters appeared to be clean.
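The comparability concern above is essentially a normalization problem: dust totals gathered over different collection windows can only be compared fairly on a per-operating-hour basis. A minimal sketch, with invented masses and hours rather than USPS test data:

```python
def dust_rate(dust_grams, operating_hours):
    """Grams of dust collected per hour of machine operation, so machines
    with different duty cycles can be compared fairly."""
    if operating_hours <= 0:
        raise ValueError("operating hours must be positive")
    return dust_grams / operating_hours

# Hypothetical readings: identical total mass implies very different
# collection rates once the duty cycle is taken into account.
filtered = dust_rate(12.0, 24)      # machine measured over a 24-hour window
unfiltered = dust_rate(12.0, 8)     # machine measured over an 8-hour shift
```

Comparing raw totals across the two windows would make the 24-hour machine look equivalent, when its hourly collection rate is actually one-third that of the other machine.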
The bulk of the dust may be lodged in the innards of the machines and electronic equipment and not in the filters. Therefore, USPS maintenance officials are concerned that mail processing equipment, such as the DBCS, is not being cleaned as thoroughly as it was previously with the dry sweeping and compressed air blowing methods. Without an effective mechanism to clean the equipment, the dust lodged in the machines will accumulate relatively quickly and may result in burned out pinch rollers, equipment breakdowns, and generally higher repair costs and downtime. Hence, USPS may incur additional costs for repairing the AFCS and DBCS equipment, and the additional maintenance may affect its operations. USPS believes that installing HEPA filtration systems will minimize the risks of airborne biohazards in the event of another anthrax attack, reduce dust levels, and lessen workers’ allergy-like symptoms. Therefore, USPS is proposing the use of HEPA filtration technology as a final filtering stage to remove smaller particles that constitute airborne biohazards. However, the design and configuration of the HEPA filtration system impose additional requirements. First, USPS has identified that the HEPA filtration systems installed at the Dulles P&DC/F require additional power to avoid affecting current mail processing equipment. At the Dulles facility, two air filtration systems—the Torit and FSX—have been installed. The Torit system is being tested on the DBCS. The FSX system is being tested on the AFCS. See figure 2 for the HEPA filtration system design at the Dulles P&DC/F, and figures 3 and 4 for pictures of the FSX and Torit HEPA air filtration systems being tested there. Both the FSX and Torit systems have been installed with the ductwork covering the entire AFCS and the DBCS units.
The front of the DBCS is covered with plastic, and the back of the cabinet doors have channels cut into them to allow the air to flow up into the ductwork along the entire length of the machine. According to USPS officials, this design, as it is configured, presumably collects dust from all of the rollers and belts along the length of the machine and directs airborne dust to the ductwork. However, this design requires a large amount of power to generate enough airflow to move the dust through the machines. As a result, the Vice President of Engineering is concerned that the amount of energy required to run the HEPA filtration systems might overwhelm the power supply at the P&DC/F and may result in an outage if additional power is not provided. He added that the HEPA filtration system’s impact on the power supply is a serious concern, which the agency plans to address by performing site surveys to determine how much additional power is required for HEPA air filtration to operate effectively and to avoid degrading the performance of mail processing equipment. Another concern with USPS’s design of the HEPA filtration system on the mail processing equipment is that modifications must be made to each type of machine to ensure that it is automatically and continuously vacuumed and minimal dust escapes. For instance, the air from inside these machines will be filtered using HEPA filters before it is discharged back into the mail processing environment. The continuous flow of air into the equipment and the discharge of air through multistage vacuum filtration (to initially filter out larger particles to prevent their plugging the finer filters), with a final filtration through a HEPA filter, is expected to reduce the release of airborne hazards from processing equipment into the facility by several orders of magnitude. To ensure that air is routed to the HEPA filters, the AFCS and the DBCS have to be closed up with metal and plastic hoods, respectively. 
See figures 5 and 6 for examples of the AFCS metal hoods and DBCS plastic shrouds and figure 7 for the DBCS Torit air filtration system. USPS has not yet performed any tests to determine whether the HEPA air filtration system will impede the performance of the proposed air sampling and detection system. While HEPA filtration systems might reduce the risk of exposure to biohazards, USPS will need additional technologies to detect and identify potential hazardous materials as early as possible in the mailstream. Therefore, in addition to installing air filtration equipment, USPS is designing and installing air sampling and detection equipment to monitor airborne particles released during automated mail processing. USPS plans to use this sampling in conjunction with biohazard detection technology to confirm whether anthrax spores are present. According to USPS officials, to be most effective in collecting airborne anthrax, the air sampling and detection system must be placed directly over the automated mail processing machines, including the AFCS and the DBCS, where the anthrax dispersion is most likely to occur. The efficacy of the air sampling and detection equipment, however, might be hindered since the AFCS and DBCS will have to be closed up with metal and plastic hoods, respectively, in order for the HEPA filtration equipment to function effectively. Refer to figures 5 and 6 for pictures of the AFCS and DBCS with the metal and plastic hoods installed. Therefore, any HEPA filtration equipment that is installed in the P&DC/F would have to be designed so that it does not interfere with the anthrax air sampling and detection system. USPS engineers recognize this requirement and stated that they would design a “dead zone,” or an area free of any negative air pressure, in the location where individual pieces of mail are processed through pinch rollers so that a proper sample can be taken by the air sampling and detection system.
Consequently, until USPS tests this requirement, it will not know whether the “dead zone” design will be sufficient to ensure that an adequate sample can be collected for detection. According to industry best practices, investment analysis is a critical process required to select and fund technology investments that will result in cost-effective solutions focused on measurable and specific mission-related benefits. This process involves examining the fundamental cost, benefit, schedule, and risk characteristics of each investment before it is funded. USPS has not completed an investment analysis of its HEPA air filtration systems currently deployed at the Dulles P&DC/F and, thus, has not justified investing in HEPA filtration systems for deployment in its approximately 300 P&DC/Fs across the country. Even though USPS has prepared cost estimates to develop and implement HEPA filtration systems at its nearly 300 P&DC/Fs across the nation, these estimates are incomplete and, therefore, understated. USPS plans to implement the HEPA air filtration systems nationwide, at a cost of $245 million, by the end of fiscal year 2002 for air filtration on the Loose Mail system, AFCS, DBCS, and the Automated Flats Sorting Machine 100 (AFSM). A supplemental funding request of $61 million is also being considered for fiscal year 2002 to acquire additional air filtration systems on the regular and outgoing DBCS machines. When added to the $245 million already being considered for near-term purchase, the total cost of HEPA air filtration systems could increase to $306 million by September 2002. However, these amounts do not include recurring costs, estimated at more than $125 million annually, for regular activities such as equipment maintenance, purchase of new filters, training, and updates to air filtration manuals for the more than 10,000 HEPA filtration systems nationwide. Furthermore, USPS may also incur additional costs.
For instance, preliminary data show that the HEPA filtration systems require more power, which results in additional costs to run these systems. According to our analysis of the initial implementation of air filtration on the Loose Mail systems, an annual cost of about $8 million will be required to power these systems. When this amount is added to the expenditures associated with providing more power to support the 6,300 AFCS and DBCS units on which HEPA filtering technology will be installed, the annual cost for the extra energy required could be as high as $42 million. Furthermore, there is a risk of greater maintenance costs because the HEPA filtration systems and portable vacuum systems appear to be less efficient in cleaning the mail processing equipment and may result in burned-out bearings and equipment parts. USPS maintenance and engineering personnel at Dulles and Merrifield informed us that there is significant potential for equipment maintenance costs to rise. For example, we analyzed the potential cost impact of bearing replacement for the DBCS machines and found that, depending on the cost of the bearing, an additional $26 million to $46 million could be spent on maintenance each year. According to USPS officials, the DBCS is the largest fleet of machines that USPS owns, and these machines run all secondary mail. If these machines break down more often because the bearings need replacing, both costs and operations could be affected. In addition, USPS will also have to consider the risks of increased maintenance costs associated with other equipment, such as the AFCS, Loose Mail, and AFSM 100, which also contain bearings. However, until USPS completes a risk assessment to determine whether the bearings are wearing out faster under the new maintenance procedures, it cannot know the extent of the additional maintenance costs that will be required. 
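The capital and recurring figures cited above can be combined into a rough overall cost picture. The sketch below is purely illustrative, using only the estimates stated in this report (all figures in millions of dollars); the variable names are our own labels for those figures, not USPS budget line items.

```python
# Illustrative roll-up of the HEPA air filtration cost estimates cited in
# this report. All figures are in millions of dollars and come from the
# report text; the groupings below are our own, not USPS budget categories.

# One-time acquisition costs
base_deployment = 245      # nationwide rollout planned by end of FY 2002
supplemental_dbcs = 61     # supplemental request for additional DBCS units
capital_total = base_deployment + supplemental_dbcs

# Recurring annual costs
filter_maintenance = 125       # filters, maintenance, training, manuals
extra_power = 42               # upper-bound energy cost estimate
bearing_low, bearing_high = 26, 46   # DBCS bearing-replacement range

recurring_low = filter_maintenance + extra_power + bearing_low
recurring_high = filter_maintenance + extra_power + bearing_high

print(f"One-time capital cost:  ${capital_total}M")
print(f"Recurring annual cost:  ${recurring_low}M to ${recurring_high}M")
```

As the roll-up suggests, the recurring annual costs alone could approach two-thirds of the one-time acquisition cost, which is why the report emphasizes completing an investment analysis before nationwide deployment.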
With respect to benefits, USPS officials stated that the agency is reluctant to quantify benefits because it is committed to spending whatever is necessary to protect its employees from future biohazard attacks. Therefore, the officials noted that it is difficult to quantify the benefits of this technology and its ability to safeguard human life. Nevertheless, without completing the tests required to confirm that the HEPA filtration systems are able to contain airborne anthrax in a mail processing environment, USPS will not know whether it is making a worthwhile investment. We recognize the challenge that USPS faces in trying to protect its workers from airborne biohazards while trying to maintain its operations and control costs. By designing and testing air filtration systems on its mail processing machines, USPS has taken steps to reduce its employees' risk of exposure to biohazards. However, the USPS HEPA air filtration system design has not yet been proven to contain anthrax spores or reduce the levels of dust in a mail processing environment and in mail processing equipment. In addition, the HEPA filtration system's design and installation require additional energy and modifications to the mail processing equipment in order to work properly. Furthermore, USPS has not verified through testing that the HEPA air filtration system will not interfere with the air sampling and detection system. Finally, even though USPS has identified initial cost estimates, it has not yet completed investment analyses to identify the costs, benefits, and risks associated with alternative deployment scenarios for HEPA filtration systems. As a result, USPS has no assurance that investing in HEPA air filtration systems will provide adequate risk reduction for its employees. 
Given the magnitude of this investment and its impact on maintaining the mail processing equipment, as well as potential effects on its operations and proposed biohazard detection capabilities, it is important that USPS show the specific performance gains attributable to this initiative before full deployment is pursued. To ensure that USPS is making a sound investment, we recommend that the Postmaster General direct the Vice President of Engineering to complete the following actions before determining whether to proceed with a large-scale rollout of air filtration systems at 300 USPS P&DC/Fs:

- Perform tests to determine (1) the HEPA air filtration system's ability to trap released hazards and other contaminants and (2) what level of hazards or contaminants could be released into the mail processing environment as a result of the air filtration system's design.
- Perform integrated tests with the HEPA air filtration system and the detection technologies being considered to determine whether the "dead zone" will impede the detection technologies' performance.
- Identify the effects of the HEPA filtration system's energy consumption on mail processing equipment performance and what could be done to mitigate this risk.
- Complete an investment analysis to prioritize USPS's plans to spend approximately $300 million to deploy the HEPA air filtration systems nationwide.
- Analyze alternative solutions, including whether maintenance costs can be reduced by using compressed air for cleaning mail processing equipment after implementing a suitable detection technology.

USPS provided comments on a draft of this report in a letter dated August 9, 2002. These comments are summarized below and reproduced in appendix I. 
In commenting on a draft of our report, USPS shared overall concerns that (1) our report placed too much emphasis on the supposed secondary benefits of the air filtration systems, (2) the cost estimates in its Emergency Preparedness Plan are low, and (3) increased maintenance costs are not anticipated. On the other hand, USPS generally agreed with our recommendations to continue testing the system to confirm its ability to trap anthrax spores and to test for interaction between the air filtration and detection systems. Furthermore, the Service noted that detailed site surveys would be performed at each P&DC/F as part of the deployment planning process to ensure that operation of these systems will not adversely affect the P&DC/F's power supply. USPS also commented that a Decision Analysis Report (DAR) is being prepared that will address both start-up costs to procure and deploy the equipment, as well as recurring costs such as increased electrical usage, maintenance support, spare parts, and training costs for HEPA air filtration systems. In its comments, the Service stated that it plans to submit a DAR that must be reviewed and approved by senior management and voted on by USPS's Board of Governors prior to deployment. Finally, USPS agreed with our recommendation that it review the prohibition on using compressed air to clean mail processing equipment after effective biohazard detection systems are in place. With regard to the concern about too much emphasis on secondary benefits, USPS noted that the main purpose of adding air filtration systems to the mail processing equipment is to minimize the potential exposure risk to postal employees and customers in the event of another anthrax attack. Further, the Service stated that it does not expect the air filtration systems to eliminate the need for daily cleaning of the mail processing equipment, and that no cost reductions for reducing nuisance dust were used to justify the deployment of these systems. 
We modified our report to address USPS's concern that the draft report placed too much emphasis on the secondary effects of air filtration systems. We also focused on the HEPA air filtration system's ability to clean mail processing equipment because an additional maintenance cost of up to $46 million annually could result from installing these HEPA air filtration systems and changing maintenance practices from compressed-air blowing to hand vacuuming. Furthermore, USPS's Emergency Preparedness Plan discusses the HEPA air filtration system's ability to clean equipment and also states that such designs for reducing nuisance dust were under way prior to the anthrax attacks. USPS's comments additionally stated that the cost for deploying HEPA air filtration systems nationwide was based on the best information available at the time. The Service anticipates that as it moves further into testing and manufacturing, it may run into unanticipated complications that will require revisions to the cost estimates. We agree that unanticipated complications may arise and, as a result, additional funding may be required to reengineer and resolve these issues, which will most likely increase the cost to develop, deploy, and maintain the HEPA air filtration systems. Furthermore, we are concerned that the costs are understated because of the potential for increased operational costs to power the equipment, which could add up to $42 million annually. The Service also had concerns relating to our finding on increased maintenance costs. The comments stated that USPS has not seen any increase in the number of machine repairs and parts replacements required because of dust buildup in bearings and other components and, therefore, does not foresee any increased maintenance costs. However, our audit work and evidence provided to us by USPS engineers show that bearing replacement rates have changed in the last 6 months. 
USPS may need to conduct additional studies and analyses before it can determine whether the new maintenance procedures will increase or decrease costs. With regard to USPS's concurrence with our other recommendations, these planned actions are the appropriate steps to take. USPS plans to conduct additional testing at the Dulles P&DC/F to determine the system's effectiveness in capturing biohazards and to determine the amount of biohazards that might be released into the mail processing environment. Testing in a P&DC/F environment with particles in the 2 to 6 micron range can be used by USPS to confirm that the system operates as designed and will provide USPS with objective data to make appropriate modifications, if necessary, to improve the design. Finally, once the additional testing is completed, USPS plans to complete the DAR for the HEPA air filtration system and present it for management review. This should ensure that USPS management has accurate and complete information on the capabilities and cost of the air filtration system prior to making a decision on nationwide implementation. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will provide copies to interested congressional committees, the Postmaster General, and the Chief Executive Officer of USPS. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions on matters discussed in this report, please contact me at (202) 512-6412 or Madhav Panwar, Director, at (202) 512-6228. We can also be reached by e-mail at rhodesk@gao.gov and panwarm@gao.gov, respectively. Individuals making key contributions to this report were Karen A. Richey, Yvette R. Banks, Teresa Anderson, Teea Kim, and Sushil Sharma. 
Following the anthrax attacks of October 2001, the United States Postal Service (USPS) has begun to examine various technologies that could be implemented in the event of another bioterror attack. The high-efficiency particulate air (HEPA) filtration system is being used as a prototype at two facilities and is planned for implementation throughout the country. HEPA filtering technology is the state-of-the-art technology for the removal of particulate biohazards and other micron-sized particles. USPS has not adequately tested the HEPA filtration system to confirm that it will meet its intended purpose of trapping anthrax spores and its secondary purpose of cleaning the mail processing equipment. USPS's testing has not shown conclusively (1) the HEPA filtration system's ability to trap released hazards and other contaminants, and (2) what level of hazards or contaminants could be released into the mail processing environment as a result of the air filtration system's design. Furthermore, USPS has not verified through testing that the air filtration system will not interfere with the air sampling and detection equipment. Even though HEPA filtration systems could reduce the risk of exposure to biohazards, they may negate the benefits of other technologies being considered by USPS to protect its employees and customers in the event of another anthrax attack. Finally, the design and installation of the HEPA filtration system requires custom modification to USPS equipment nationwide and will likely cost more than USPS projected in its Emergency Preparedness Plan.